Test Report: KVM_Linux_crio 19128

                    
41623ece558473adce14b68ca67ade23eff6d1a3:2024-06-25:35038

Test fail (12/207)

TestAddons/Setup (2400.06s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-739670 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-739670 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: signal: killed (39m59.958881382s)

-- stdout --
	* [addons-739670] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19128-13846/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19128-13846/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "addons-739670" primary control-plane node in "addons-739670" cluster
	* Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	  - Using image ghcr.io/helm/tiller:v2.17.0
	  - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	  - Using image docker.io/marcnuri/yakd:0.0.4
	  - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	  - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	  - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	  - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	  - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	  - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	  - Using image docker.io/registry:2.8.3
	  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	  - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	  - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.29.0
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	  - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	  - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	  - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	  - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	  - Using image docker.io/busybox:stable
	* To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-739670 service yakd-dashboard -n yakd-dashboard
	
	* Verifying registry addon...
	* Verifying ingress addon...
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	* Verifying csi-hostpath-driver addon...
	  - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	* Verifying gcp-auth addon...
	* Your GCP credentials will now be mounted into every pod created in the addons-739670 cluster.
	* If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	* If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	* Enabled addons: storage-provisioner, cloud-spanner, helm-tiller, ingress-dns, nvidia-device-plugin, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver

-- /stdout --
** stderr ** 
	I0625 15:10:31.454577   22036 out.go:291] Setting OutFile to fd 1 ...
	I0625 15:10:31.454859   22036 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 15:10:31.454870   22036 out.go:304] Setting ErrFile to fd 2...
	I0625 15:10:31.454874   22036 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 15:10:31.455063   22036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 15:10:31.455622   22036 out.go:298] Setting JSON to false
	I0625 15:10:31.456645   22036 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3175,"bootTime":1719325056,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0625 15:10:31.456709   22036 start.go:139] virtualization: kvm guest
	I0625 15:10:31.481219   22036 out.go:177] * [addons-739670] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0625 15:10:31.497651   22036 notify.go:220] Checking for updates...
	I0625 15:10:31.497681   22036 out.go:177]   - MINIKUBE_LOCATION=19128
	I0625 15:10:31.498986   22036 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0625 15:10:31.500207   22036 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19128-13846/kubeconfig
	I0625 15:10:31.501385   22036 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 15:10:31.502659   22036 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0625 15:10:31.503836   22036 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0625 15:10:31.505156   22036 driver.go:392] Setting default libvirt URI to qemu:///system
	I0625 15:10:31.535473   22036 out.go:177] * Using the kvm2 driver based on user configuration
	I0625 15:10:31.536550   22036 start.go:297] selected driver: kvm2
	I0625 15:10:31.536572   22036 start.go:901] validating driver "kvm2" against <nil>
	I0625 15:10:31.536588   22036 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0625 15:10:31.537253   22036 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0625 15:10:31.537328   22036 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19128-13846/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0625 15:10:31.551447   22036 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0625 15:10:31.551505   22036 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0625 15:10:31.551722   22036 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0625 15:10:31.551775   22036 cni.go:84] Creating CNI manager for ""
	I0625 15:10:31.551787   22036 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0625 15:10:31.551793   22036 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0625 15:10:31.551850   22036 start.go:340] cluster config:
	{Name:addons-739670 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-739670 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 15:10:31.551931   22036 iso.go:125] acquiring lock: {Name:mk76df652d5e768afc73443035d5ecb8b75ed16c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0625 15:10:31.553366   22036 out.go:177] * Starting "addons-739670" primary control-plane node in "addons-739670" cluster
	I0625 15:10:31.554408   22036 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0625 15:10:31.554439   22036 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0625 15:10:31.554449   22036 cache.go:56] Caching tarball of preloaded images
	I0625 15:10:31.554540   22036 preload.go:173] Found /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0625 15:10:31.554551   22036 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0625 15:10:31.554813   22036 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/config.json ...
	I0625 15:10:31.554830   22036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/config.json: {Name:mkad7f3d2e15c5133a56688031a3786d9bb97c14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:10:31.554950   22036 start.go:360] acquireMachinesLock for addons-739670: {Name:mk2a1ebee912b37a2b68bf2f76641f82f8fc2fcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0625 15:10:31.554994   22036 start.go:364] duration metric: took 31.223µs to acquireMachinesLock for "addons-739670"
	I0625 15:10:31.555010   22036 start.go:93] Provisioning new machine with config: &{Name:addons-739670 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-739670 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0625 15:10:31.555057   22036 start.go:125] createHost starting for "" (driver="kvm2")
	I0625 15:10:31.556443   22036 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0625 15:10:31.556573   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:10:31.556610   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:10:31.570020   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I0625 15:10:31.570450   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:10:31.571043   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:10:31.571066   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:10:31.571400   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:10:31.571612   22036 main.go:141] libmachine: (addons-739670) Calling .GetMachineName
	I0625 15:10:31.571772   22036 main.go:141] libmachine: (addons-739670) Calling .DriverName
	I0625 15:10:31.571920   22036 start.go:159] libmachine.API.Create for "addons-739670" (driver="kvm2")
	I0625 15:10:31.571949   22036 client.go:168] LocalClient.Create starting
	I0625 15:10:31.571988   22036 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem
	I0625 15:10:31.777779   22036 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem
	I0625 15:10:31.919462   22036 main.go:141] libmachine: Running pre-create checks...
	I0625 15:10:31.919484   22036 main.go:141] libmachine: (addons-739670) Calling .PreCreateCheck
	I0625 15:10:31.919953   22036 main.go:141] libmachine: (addons-739670) Calling .GetConfigRaw
	I0625 15:10:31.920399   22036 main.go:141] libmachine: Creating machine...
	I0625 15:10:31.920414   22036 main.go:141] libmachine: (addons-739670) Calling .Create
	I0625 15:10:31.920542   22036 main.go:141] libmachine: (addons-739670) Creating KVM machine...
	I0625 15:10:31.921687   22036 main.go:141] libmachine: (addons-739670) DBG | found existing default KVM network
	I0625 15:10:31.922436   22036 main.go:141] libmachine: (addons-739670) DBG | I0625 15:10:31.922292   22058 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0625 15:10:31.922505   22036 main.go:141] libmachine: (addons-739670) DBG | created network xml: 
	I0625 15:10:31.922535   22036 main.go:141] libmachine: (addons-739670) DBG | <network>
	I0625 15:10:31.922569   22036 main.go:141] libmachine: (addons-739670) DBG |   <name>mk-addons-739670</name>
	I0625 15:10:31.922595   22036 main.go:141] libmachine: (addons-739670) DBG |   <dns enable='no'/>
	I0625 15:10:31.922609   22036 main.go:141] libmachine: (addons-739670) DBG |   
	I0625 15:10:31.922622   22036 main.go:141] libmachine: (addons-739670) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0625 15:10:31.922632   22036 main.go:141] libmachine: (addons-739670) DBG |     <dhcp>
	I0625 15:10:31.922641   22036 main.go:141] libmachine: (addons-739670) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0625 15:10:31.922663   22036 main.go:141] libmachine: (addons-739670) DBG |     </dhcp>
	I0625 15:10:31.922677   22036 main.go:141] libmachine: (addons-739670) DBG |   </ip>
	I0625 15:10:31.922712   22036 main.go:141] libmachine: (addons-739670) DBG |   
	I0625 15:10:31.922739   22036 main.go:141] libmachine: (addons-739670) DBG | </network>
	I0625 15:10:31.922752   22036 main.go:141] libmachine: (addons-739670) DBG | 
	I0625 15:10:31.927826   22036 main.go:141] libmachine: (addons-739670) DBG | trying to create private KVM network mk-addons-739670 192.168.39.0/24...
	I0625 15:10:31.989793   22036 main.go:141] libmachine: (addons-739670) DBG | private KVM network mk-addons-739670 192.168.39.0/24 created
	I0625 15:10:31.989827   22036 main.go:141] libmachine: (addons-739670) Setting up store path in /home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670 ...
	I0625 15:10:31.989851   22036 main.go:141] libmachine: (addons-739670) DBG | I0625 15:10:31.989746   22058 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 15:10:31.989867   22036 main.go:141] libmachine: (addons-739670) Building disk image from file:///home/jenkins/minikube-integration/19128-13846/.minikube/cache/iso/amd64/minikube-v1.33.1-1719245461-19128-amd64.iso
	I0625 15:10:31.989921   22036 main.go:141] libmachine: (addons-739670) Downloading /home/jenkins/minikube-integration/19128-13846/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19128-13846/.minikube/cache/iso/amd64/minikube-v1.33.1-1719245461-19128-amd64.iso...
	I0625 15:10:32.242390   22036 main.go:141] libmachine: (addons-739670) DBG | I0625 15:10:32.242260   22058 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670/id_rsa...
	I0625 15:10:32.301161   22036 main.go:141] libmachine: (addons-739670) DBG | I0625 15:10:32.301060   22058 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670/addons-739670.rawdisk...
	I0625 15:10:32.301191   22036 main.go:141] libmachine: (addons-739670) DBG | Writing magic tar header
	I0625 15:10:32.301205   22036 main.go:141] libmachine: (addons-739670) DBG | Writing SSH key tar header
	I0625 15:10:32.301219   22036 main.go:141] libmachine: (addons-739670) DBG | I0625 15:10:32.301171   22058 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670 ...
	I0625 15:10:32.301271   22036 main.go:141] libmachine: (addons-739670) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670
	I0625 15:10:32.301313   22036 main.go:141] libmachine: (addons-739670) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube/machines
	I0625 15:10:32.301333   22036 main.go:141] libmachine: (addons-739670) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670 (perms=drwx------)
	I0625 15:10:32.301345   22036 main.go:141] libmachine: (addons-739670) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 15:10:32.301356   22036 main.go:141] libmachine: (addons-739670) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846
	I0625 15:10:32.301361   22036 main.go:141] libmachine: (addons-739670) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0625 15:10:32.301367   22036 main.go:141] libmachine: (addons-739670) DBG | Checking permissions on dir: /home/jenkins
	I0625 15:10:32.301373   22036 main.go:141] libmachine: (addons-739670) DBG | Checking permissions on dir: /home
	I0625 15:10:32.301404   22036 main.go:141] libmachine: (addons-739670) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube/machines (perms=drwxr-xr-x)
	I0625 15:10:32.301420   22036 main.go:141] libmachine: (addons-739670) DBG | Skipping /home - not owner
	I0625 15:10:32.301434   22036 main.go:141] libmachine: (addons-739670) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube (perms=drwxr-xr-x)
	I0625 15:10:32.301446   22036 main.go:141] libmachine: (addons-739670) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846 (perms=drwxrwxr-x)
	I0625 15:10:32.301455   22036 main.go:141] libmachine: (addons-739670) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0625 15:10:32.301461   22036 main.go:141] libmachine: (addons-739670) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0625 15:10:32.301467   22036 main.go:141] libmachine: (addons-739670) Creating domain...
	I0625 15:10:32.302385   22036 main.go:141] libmachine: (addons-739670) define libvirt domain using xml: 
	I0625 15:10:32.302401   22036 main.go:141] libmachine: (addons-739670) <domain type='kvm'>
	I0625 15:10:32.302412   22036 main.go:141] libmachine: (addons-739670)   <name>addons-739670</name>
	I0625 15:10:32.302420   22036 main.go:141] libmachine: (addons-739670)   <memory unit='MiB'>4000</memory>
	I0625 15:10:32.302429   22036 main.go:141] libmachine: (addons-739670)   <vcpu>2</vcpu>
	I0625 15:10:32.302438   22036 main.go:141] libmachine: (addons-739670)   <features>
	I0625 15:10:32.302445   22036 main.go:141] libmachine: (addons-739670)     <acpi/>
	I0625 15:10:32.302454   22036 main.go:141] libmachine: (addons-739670)     <apic/>
	I0625 15:10:32.302462   22036 main.go:141] libmachine: (addons-739670)     <pae/>
	I0625 15:10:32.302494   22036 main.go:141] libmachine: (addons-739670)     
	I0625 15:10:32.302520   22036 main.go:141] libmachine: (addons-739670)   </features>
	I0625 15:10:32.302538   22036 main.go:141] libmachine: (addons-739670)   <cpu mode='host-passthrough'>
	I0625 15:10:32.302545   22036 main.go:141] libmachine: (addons-739670)   
	I0625 15:10:32.302556   22036 main.go:141] libmachine: (addons-739670)   </cpu>
	I0625 15:10:32.302561   22036 main.go:141] libmachine: (addons-739670)   <os>
	I0625 15:10:32.302568   22036 main.go:141] libmachine: (addons-739670)     <type>hvm</type>
	I0625 15:10:32.302573   22036 main.go:141] libmachine: (addons-739670)     <boot dev='cdrom'/>
	I0625 15:10:32.302580   22036 main.go:141] libmachine: (addons-739670)     <boot dev='hd'/>
	I0625 15:10:32.302586   22036 main.go:141] libmachine: (addons-739670)     <bootmenu enable='no'/>
	I0625 15:10:32.302593   22036 main.go:141] libmachine: (addons-739670)   </os>
	I0625 15:10:32.302608   22036 main.go:141] libmachine: (addons-739670)   <devices>
	I0625 15:10:32.302619   22036 main.go:141] libmachine: (addons-739670)     <disk type='file' device='cdrom'>
	I0625 15:10:32.302628   22036 main.go:141] libmachine: (addons-739670)       <source file='/home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670/boot2docker.iso'/>
	I0625 15:10:32.302636   22036 main.go:141] libmachine: (addons-739670)       <target dev='hdc' bus='scsi'/>
	I0625 15:10:32.302641   22036 main.go:141] libmachine: (addons-739670)       <readonly/>
	I0625 15:10:32.302653   22036 main.go:141] libmachine: (addons-739670)     </disk>
	I0625 15:10:32.302660   22036 main.go:141] libmachine: (addons-739670)     <disk type='file' device='disk'>
	I0625 15:10:32.302668   22036 main.go:141] libmachine: (addons-739670)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0625 15:10:32.302676   22036 main.go:141] libmachine: (addons-739670)       <source file='/home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670/addons-739670.rawdisk'/>
	I0625 15:10:32.302687   22036 main.go:141] libmachine: (addons-739670)       <target dev='hda' bus='virtio'/>
	I0625 15:10:32.302694   22036 main.go:141] libmachine: (addons-739670)     </disk>
	I0625 15:10:32.302699   22036 main.go:141] libmachine: (addons-739670)     <interface type='network'>
	I0625 15:10:32.302708   22036 main.go:141] libmachine: (addons-739670)       <source network='mk-addons-739670'/>
	I0625 15:10:32.302713   22036 main.go:141] libmachine: (addons-739670)       <model type='virtio'/>
	I0625 15:10:32.302718   22036 main.go:141] libmachine: (addons-739670)     </interface>
	I0625 15:10:32.302726   22036 main.go:141] libmachine: (addons-739670)     <interface type='network'>
	I0625 15:10:32.302731   22036 main.go:141] libmachine: (addons-739670)       <source network='default'/>
	I0625 15:10:32.302736   22036 main.go:141] libmachine: (addons-739670)       <model type='virtio'/>
	I0625 15:10:32.302742   22036 main.go:141] libmachine: (addons-739670)     </interface>
	I0625 15:10:32.302752   22036 main.go:141] libmachine: (addons-739670)     <serial type='pty'>
	I0625 15:10:32.302759   22036 main.go:141] libmachine: (addons-739670)       <target port='0'/>
	I0625 15:10:32.302764   22036 main.go:141] libmachine: (addons-739670)     </serial>
	I0625 15:10:32.302772   22036 main.go:141] libmachine: (addons-739670)     <console type='pty'>
	I0625 15:10:32.302777   22036 main.go:141] libmachine: (addons-739670)       <target type='serial' port='0'/>
	I0625 15:10:32.302784   22036 main.go:141] libmachine: (addons-739670)     </console>
	I0625 15:10:32.302789   22036 main.go:141] libmachine: (addons-739670)     <rng model='virtio'>
	I0625 15:10:32.302797   22036 main.go:141] libmachine: (addons-739670)       <backend model='random'>/dev/random</backend>
	I0625 15:10:32.302804   22036 main.go:141] libmachine: (addons-739670)     </rng>
	I0625 15:10:32.302809   22036 main.go:141] libmachine: (addons-739670)     
	I0625 15:10:32.302819   22036 main.go:141] libmachine: (addons-739670)     
	I0625 15:10:32.302835   22036 main.go:141] libmachine: (addons-739670)   </devices>
	I0625 15:10:32.302853   22036 main.go:141] libmachine: (addons-739670) </domain>
	I0625 15:10:32.302867   22036 main.go:141] libmachine: (addons-739670) 
	I0625 15:10:32.308627   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:17:a1:20 in network default
	I0625 15:10:32.309049   22036 main.go:141] libmachine: (addons-739670) Ensuring networks are active...
	I0625 15:10:32.309069   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:32.309688   22036 main.go:141] libmachine: (addons-739670) Ensuring network default is active
	I0625 15:10:32.309934   22036 main.go:141] libmachine: (addons-739670) Ensuring network mk-addons-739670 is active
	I0625 15:10:32.310369   22036 main.go:141] libmachine: (addons-739670) Getting domain xml...
	I0625 15:10:32.310932   22036 main.go:141] libmachine: (addons-739670) Creating domain...
	I0625 15:10:33.668876   22036 main.go:141] libmachine: (addons-739670) Waiting to get IP...
	I0625 15:10:33.669602   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:33.669984   22036 main.go:141] libmachine: (addons-739670) DBG | unable to find current IP address of domain addons-739670 in network mk-addons-739670
	I0625 15:10:33.670011   22036 main.go:141] libmachine: (addons-739670) DBG | I0625 15:10:33.669936   22058 retry.go:31] will retry after 248.858896ms: waiting for machine to come up
	I0625 15:10:33.920512   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:33.920910   22036 main.go:141] libmachine: (addons-739670) DBG | unable to find current IP address of domain addons-739670 in network mk-addons-739670
	I0625 15:10:33.920931   22036 main.go:141] libmachine: (addons-739670) DBG | I0625 15:10:33.920876   22058 retry.go:31] will retry after 380.507121ms: waiting for machine to come up
	I0625 15:10:34.303346   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:34.303858   22036 main.go:141] libmachine: (addons-739670) DBG | unable to find current IP address of domain addons-739670 in network mk-addons-739670
	I0625 15:10:34.303878   22036 main.go:141] libmachine: (addons-739670) DBG | I0625 15:10:34.303824   22058 retry.go:31] will retry after 308.524256ms: waiting for machine to come up
	I0625 15:10:34.614393   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:34.614903   22036 main.go:141] libmachine: (addons-739670) DBG | unable to find current IP address of domain addons-739670 in network mk-addons-739670
	I0625 15:10:34.614922   22036 main.go:141] libmachine: (addons-739670) DBG | I0625 15:10:34.614865   22058 retry.go:31] will retry after 392.585094ms: waiting for machine to come up
	I0625 15:10:35.009552   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:35.009955   22036 main.go:141] libmachine: (addons-739670) DBG | unable to find current IP address of domain addons-739670 in network mk-addons-739670
	I0625 15:10:35.009982   22036 main.go:141] libmachine: (addons-739670) DBG | I0625 15:10:35.009908   22058 retry.go:31] will retry after 578.312219ms: waiting for machine to come up
	I0625 15:10:35.589575   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:35.589956   22036 main.go:141] libmachine: (addons-739670) DBG | unable to find current IP address of domain addons-739670 in network mk-addons-739670
	I0625 15:10:35.589979   22036 main.go:141] libmachine: (addons-739670) DBG | I0625 15:10:35.589926   22058 retry.go:31] will retry after 590.681467ms: waiting for machine to come up
	I0625 15:10:36.182658   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:36.183109   22036 main.go:141] libmachine: (addons-739670) DBG | unable to find current IP address of domain addons-739670 in network mk-addons-739670
	I0625 15:10:36.183134   22036 main.go:141] libmachine: (addons-739670) DBG | I0625 15:10:36.183053   22058 retry.go:31] will retry after 1.138704629s: waiting for machine to come up
	I0625 15:10:37.322876   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:37.323235   22036 main.go:141] libmachine: (addons-739670) DBG | unable to find current IP address of domain addons-739670 in network mk-addons-739670
	I0625 15:10:37.323270   22036 main.go:141] libmachine: (addons-739670) DBG | I0625 15:10:37.323209   22058 retry.go:31] will retry after 1.019205689s: waiting for machine to come up
	I0625 15:10:38.344537   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:38.344906   22036 main.go:141] libmachine: (addons-739670) DBG | unable to find current IP address of domain addons-739670 in network mk-addons-739670
	I0625 15:10:38.344932   22036 main.go:141] libmachine: (addons-739670) DBG | I0625 15:10:38.344878   22058 retry.go:31] will retry after 1.789686948s: waiting for machine to come up
	I0625 15:10:40.136789   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:40.137167   22036 main.go:141] libmachine: (addons-739670) DBG | unable to find current IP address of domain addons-739670 in network mk-addons-739670
	I0625 15:10:40.137193   22036 main.go:141] libmachine: (addons-739670) DBG | I0625 15:10:40.137123   22058 retry.go:31] will retry after 1.789065482s: waiting for machine to come up
	I0625 15:10:41.927813   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:41.928282   22036 main.go:141] libmachine: (addons-739670) DBG | unable to find current IP address of domain addons-739670 in network mk-addons-739670
	I0625 15:10:41.928307   22036 main.go:141] libmachine: (addons-739670) DBG | I0625 15:10:41.928234   22058 retry.go:31] will retry after 2.617935952s: waiting for machine to come up
	I0625 15:10:44.548878   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:44.549322   22036 main.go:141] libmachine: (addons-739670) DBG | unable to find current IP address of domain addons-739670 in network mk-addons-739670
	I0625 15:10:44.549353   22036 main.go:141] libmachine: (addons-739670) DBG | I0625 15:10:44.549275   22058 retry.go:31] will retry after 3.09143519s: waiting for machine to come up
	I0625 15:10:47.641812   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:47.642263   22036 main.go:141] libmachine: (addons-739670) DBG | unable to find current IP address of domain addons-739670 in network mk-addons-739670
	I0625 15:10:47.642286   22036 main.go:141] libmachine: (addons-739670) DBG | I0625 15:10:47.642218   22058 retry.go:31] will retry after 4.111658457s: waiting for machine to come up
	I0625 15:10:51.755351   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:51.755797   22036 main.go:141] libmachine: (addons-739670) Found IP for machine: 192.168.39.224
	I0625 15:10:51.755819   22036 main.go:141] libmachine: (addons-739670) Reserving static IP address...
	I0625 15:10:51.755840   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has current primary IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:51.756150   22036 main.go:141] libmachine: (addons-739670) DBG | unable to find host DHCP lease matching {name: "addons-739670", mac: "52:54:00:96:31:7b", ip: "192.168.39.224"} in network mk-addons-739670
	I0625 15:10:51.824853   22036 main.go:141] libmachine: (addons-739670) Reserved static IP address: 192.168.39.224
	I0625 15:10:51.824884   22036 main.go:141] libmachine: (addons-739670) DBG | Getting to WaitForSSH function...
	I0625 15:10:51.824893   22036 main.go:141] libmachine: (addons-739670) Waiting for SSH to be available...
	I0625 15:10:51.827475   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:51.827945   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:minikube Clientid:01:52:54:00:96:31:7b}
	I0625 15:10:51.827976   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:51.828105   22036 main.go:141] libmachine: (addons-739670) DBG | Using SSH client type: external
	I0625 15:10:51.828175   22036 main.go:141] libmachine: (addons-739670) DBG | Using SSH private key: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670/id_rsa (-rw-------)
	I0625 15:10:51.828229   22036 main.go:141] libmachine: (addons-739670) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.224 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0625 15:10:51.828244   22036 main.go:141] libmachine: (addons-739670) DBG | About to run SSH command:
	I0625 15:10:51.828257   22036 main.go:141] libmachine: (addons-739670) DBG | exit 0
	I0625 15:10:51.954519   22036 main.go:141] libmachine: (addons-739670) DBG | SSH cmd err, output: <nil>: 
	I0625 15:10:51.954796   22036 main.go:141] libmachine: (addons-739670) KVM machine creation complete!
	I0625 15:10:51.955034   22036 main.go:141] libmachine: (addons-739670) Calling .GetConfigRaw
	I0625 15:10:51.955720   22036 main.go:141] libmachine: (addons-739670) Calling .DriverName
	I0625 15:10:51.955911   22036 main.go:141] libmachine: (addons-739670) Calling .DriverName
	I0625 15:10:51.956063   22036 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0625 15:10:51.956078   22036 main.go:141] libmachine: (addons-739670) Calling .GetState
	I0625 15:10:51.957605   22036 main.go:141] libmachine: Detecting operating system of created instance...
	I0625 15:10:51.957621   22036 main.go:141] libmachine: Waiting for SSH to be available...
	I0625 15:10:51.957632   22036 main.go:141] libmachine: Getting to WaitForSSH function...
	I0625 15:10:51.957638   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHHostname
	I0625 15:10:51.959684   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:51.960037   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:10:51.960075   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:51.960183   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHPort
	I0625 15:10:51.960359   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:10:51.960511   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:10:51.960686   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHUsername
	I0625 15:10:51.960854   22036 main.go:141] libmachine: Using SSH client type: native
	I0625 15:10:51.961015   22036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0625 15:10:51.961025   22036 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0625 15:10:52.057681   22036 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0625 15:10:52.057705   22036 main.go:141] libmachine: Detecting the provisioner...
	I0625 15:10:52.057713   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHHostname
	I0625 15:10:52.060536   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:52.060824   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:10:52.060850   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:52.060994   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHPort
	I0625 15:10:52.061196   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:10:52.061347   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:10:52.061505   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHUsername
	I0625 15:10:52.061676   22036 main.go:141] libmachine: Using SSH client type: native
	I0625 15:10:52.061825   22036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0625 15:10:52.061835   22036 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0625 15:10:52.159167   22036 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0625 15:10:52.159260   22036 main.go:141] libmachine: found compatible host: buildroot
	I0625 15:10:52.159273   22036 main.go:141] libmachine: Provisioning with buildroot...
	I0625 15:10:52.159281   22036 main.go:141] libmachine: (addons-739670) Calling .GetMachineName
	I0625 15:10:52.159511   22036 buildroot.go:166] provisioning hostname "addons-739670"
	I0625 15:10:52.159537   22036 main.go:141] libmachine: (addons-739670) Calling .GetMachineName
	I0625 15:10:52.159733   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHHostname
	I0625 15:10:52.162343   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:52.162742   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:10:52.162770   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:52.162888   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHPort
	I0625 15:10:52.163086   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:10:52.163286   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:10:52.163430   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHUsername
	I0625 15:10:52.163584   22036 main.go:141] libmachine: Using SSH client type: native
	I0625 15:10:52.163746   22036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0625 15:10:52.163760   22036 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-739670 && echo "addons-739670" | sudo tee /etc/hostname
	I0625 15:10:52.277241   22036 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-739670
	
	I0625 15:10:52.277268   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHHostname
	I0625 15:10:52.280023   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:52.280374   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:10:52.280399   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:52.280556   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHPort
	I0625 15:10:52.280734   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:10:52.280877   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:10:52.281009   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHUsername
	I0625 15:10:52.281170   22036 main.go:141] libmachine: Using SSH client type: native
	I0625 15:10:52.281414   22036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0625 15:10:52.281444   22036 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-739670' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-739670/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-739670' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0625 15:10:52.387644   22036 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0625 15:10:52.387674   22036 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19128-13846/.minikube CaCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19128-13846/.minikube}
	I0625 15:10:52.387720   22036 buildroot.go:174] setting up certificates
	I0625 15:10:52.387735   22036 provision.go:84] configureAuth start
	I0625 15:10:52.387749   22036 main.go:141] libmachine: (addons-739670) Calling .GetMachineName
	I0625 15:10:52.388032   22036 main.go:141] libmachine: (addons-739670) Calling .GetIP
	I0625 15:10:52.390642   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:52.391013   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:10:52.391043   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:52.391155   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHHostname
	I0625 15:10:52.393241   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:52.393575   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:10:52.393606   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:52.393771   22036 provision.go:143] copyHostCerts
	I0625 15:10:52.393852   22036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem (1679 bytes)
	I0625 15:10:52.394014   22036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem (1078 bytes)
	I0625 15:10:52.394099   22036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem (1123 bytes)
	I0625 15:10:52.394163   22036 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem org=jenkins.addons-739670 san=[127.0.0.1 192.168.39.224 addons-739670 localhost minikube]
	I0625 15:10:52.457844   22036 provision.go:177] copyRemoteCerts
	I0625 15:10:52.457900   22036 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0625 15:10:52.457920   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHHostname
	I0625 15:10:52.460552   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:52.460863   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:10:52.460902   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:52.461075   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHPort
	I0625 15:10:52.461253   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:10:52.461413   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHUsername
	I0625 15:10:52.461555   22036 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670/id_rsa Username:docker}
	I0625 15:10:52.540552   22036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0625 15:10:52.565659   22036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0625 15:10:52.589599   22036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0625 15:10:52.613615   22036 provision.go:87] duration metric: took 225.869078ms to configureAuth
	I0625 15:10:52.613641   22036 buildroot.go:189] setting minikube options for container-runtime
	I0625 15:10:52.613789   22036 config.go:182] Loaded profile config "addons-739670": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 15:10:52.613855   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHHostname
	I0625 15:10:52.616225   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:52.616538   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:10:52.616566   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:52.616702   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHPort
	I0625 15:10:52.616856   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:10:52.617007   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:10:52.617156   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHUsername
	I0625 15:10:52.617344   22036 main.go:141] libmachine: Using SSH client type: native
	I0625 15:10:52.617540   22036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0625 15:10:52.617557   22036 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0625 15:10:52.991996   22036 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0625 15:10:52.992021   22036 main.go:141] libmachine: Checking connection to Docker...
	I0625 15:10:52.992032   22036 main.go:141] libmachine: (addons-739670) Calling .GetURL
	I0625 15:10:52.993462   22036 main.go:141] libmachine: (addons-739670) DBG | Using libvirt version 6000000
	I0625 15:10:52.995720   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:52.996092   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:10:52.996116   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:52.996379   22036 main.go:141] libmachine: Docker is up and running!
	I0625 15:10:52.996396   22036 main.go:141] libmachine: Reticulating splines...
	I0625 15:10:52.996405   22036 client.go:171] duration metric: took 21.424444591s to LocalClient.Create
	I0625 15:10:52.996428   22036 start.go:167] duration metric: took 21.424509157s to libmachine.API.Create "addons-739670"
	I0625 15:10:52.996460   22036 start.go:293] postStartSetup for "addons-739670" (driver="kvm2")
	I0625 15:10:52.996478   22036 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0625 15:10:52.996505   22036 main.go:141] libmachine: (addons-739670) Calling .DriverName
	I0625 15:10:52.996714   22036 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0625 15:10:52.996736   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHHostname
	I0625 15:10:52.998492   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:52.998812   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:10:52.998830   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:52.998937   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHPort
	I0625 15:10:52.999108   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:10:52.999265   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHUsername
	I0625 15:10:52.999385   22036 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670/id_rsa Username:docker}
	I0625 15:10:53.076575   22036 ssh_runner.go:195] Run: cat /etc/os-release
	I0625 15:10:53.080915   22036 info.go:137] Remote host: Buildroot 2023.02.9
	I0625 15:10:53.080942   22036 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/addons for local assets ...
	I0625 15:10:53.081012   22036 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/files for local assets ...
	I0625 15:10:53.081036   22036 start.go:296] duration metric: took 84.564408ms for postStartSetup
	I0625 15:10:53.081081   22036 main.go:141] libmachine: (addons-739670) Calling .GetConfigRaw
	I0625 15:10:53.081591   22036 main.go:141] libmachine: (addons-739670) Calling .GetIP
	I0625 15:10:53.085131   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:53.085486   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:10:53.085513   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:53.085766   22036 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/config.json ...
	I0625 15:10:53.085929   22036 start.go:128] duration metric: took 21.530861607s to createHost
	I0625 15:10:53.085948   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHHostname
	I0625 15:10:53.088143   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:53.088462   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:10:53.088480   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:53.088587   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHPort
	I0625 15:10:53.088764   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:10:53.088919   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:10:53.089043   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHUsername
	I0625 15:10:53.089283   22036 main.go:141] libmachine: Using SSH client type: native
	I0625 15:10:53.089449   22036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0625 15:10:53.089465   22036 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0625 15:10:53.186909   22036 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719328253.159695496
	
	I0625 15:10:53.186937   22036 fix.go:216] guest clock: 1719328253.159695496
	I0625 15:10:53.186947   22036 fix.go:229] Guest: 2024-06-25 15:10:53.159695496 +0000 UTC Remote: 2024-06-25 15:10:53.085938954 +0000 UTC m=+21.664397718 (delta=73.756542ms)
	I0625 15:10:53.186996   22036 fix.go:200] guest clock delta is within tolerance: 73.756542ms
	I0625 15:10:53.187002   22036 start.go:83] releasing machines lock for "addons-739670", held for 21.631997992s
	I0625 15:10:53.187027   22036 main.go:141] libmachine: (addons-739670) Calling .DriverName
	I0625 15:10:53.187293   22036 main.go:141] libmachine: (addons-739670) Calling .GetIP
	I0625 15:10:53.189720   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:53.190107   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:10:53.190127   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:53.190355   22036 main.go:141] libmachine: (addons-739670) Calling .DriverName
	I0625 15:10:53.190887   22036 main.go:141] libmachine: (addons-739670) Calling .DriverName
	I0625 15:10:53.191088   22036 main.go:141] libmachine: (addons-739670) Calling .DriverName
	I0625 15:10:53.191165   22036 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0625 15:10:53.191210   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHHostname
	I0625 15:10:53.191328   22036 ssh_runner.go:195] Run: cat /version.json
	I0625 15:10:53.191355   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHHostname
	I0625 15:10:53.193664   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:53.194012   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:10:53.194037   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:53.194124   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:53.194147   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHPort
	I0625 15:10:53.194314   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:10:53.194498   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:10:53.194520   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:53.194499   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHUsername
	I0625 15:10:53.194649   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHPort
	I0625 15:10:53.194717   22036 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670/id_rsa Username:docker}
	I0625 15:10:53.194847   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:10:53.194956   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHUsername
	I0625 15:10:53.195096   22036 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670/id_rsa Username:docker}
	I0625 15:10:53.293182   22036 ssh_runner.go:195] Run: systemctl --version
	I0625 15:10:53.299229   22036 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0625 15:10:53.455803   22036 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0625 15:10:53.461787   22036 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0625 15:10:53.461856   22036 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0625 15:10:53.477701   22036 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
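The step above disables any pre-existing bridge/podman CNI config by renaming it with a ".mk_disabled" suffix, so only the config minikube writes later is loaded. A hypothetical host-side equivalent in Go (the glob patterns mirror the find expression in the log; the real flow runs find/mv over SSH):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// disableConflictingCNI renames *bridge* and *podman* configs in dir by
// appending ".mk_disabled", mirroring the find/mv pipeline in the log.
func disableConflictingCNI(dir string) ([]string, error) {
	var disabled []string
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return nil, err
		}
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}

func main() {
	// /etc/cni/net.d is the directory scanned in the log; requires sufficient privileges.
	disabled, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("disabled CNI configs:", disabled)
}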
	I0625 15:10:53.477723   22036 start.go:494] detecting cgroup driver to use...
	I0625 15:10:53.477789   22036 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0625 15:10:53.493071   22036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0625 15:10:53.506367   22036 docker.go:217] disabling cri-docker service (if available) ...
	I0625 15:10:53.506427   22036 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0625 15:10:53.518894   22036 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0625 15:10:53.531623   22036 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0625 15:10:53.650088   22036 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0625 15:10:53.810017   22036 docker.go:233] disabling docker service ...
	I0625 15:10:53.810088   22036 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0625 15:10:53.824827   22036 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0625 15:10:53.837951   22036 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0625 15:10:53.966077   22036 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0625 15:10:54.091760   22036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0625 15:10:54.105550   22036 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0625 15:10:54.123657   22036 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0625 15:10:54.123719   22036 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:10:54.133562   22036 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0625 15:10:54.133613   22036 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:10:54.143406   22036 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:10:54.153118   22036 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:10:54.162938   22036 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0625 15:10:54.172920   22036 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:10:54.183323   22036 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:10:54.200120   22036 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
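The sed calls above pin the pause image and the cgroup driver in /etc/crio/crio.conf.d/02-crio.conf (plus the conmon_cgroup and default_sysctls entries). A minimal Go sketch performing the two main substitutions in-process (assumed path and regexes; the real flow runs sed over SSH):

package main

import (
	"fmt"
	"os"
	"regexp"
)

var (
	pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
)

// rewriteCrioConf pins pause_image and cgroup_manager, like the sed calls in the log.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := pauseRe.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	out = cgroupRe.ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Values taken from the log; the path is CRI-O's drop-in config on the guest.
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
		fmt.Println("error:", err)
	}
}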
	I0625 15:10:54.209921   22036 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0625 15:10:54.218709   22036 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0625 15:10:54.218755   22036 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0625 15:10:54.230874   22036 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
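The failed sysctl above is expected on a fresh guest: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once br_netfilter is loaded, so the fallback is modprobe followed by enabling IPv4 forwarding. A hedged Go sketch of that check (running modprobe locally via sudo is an assumption for illustration; minikube issues these commands over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"

	if _, err := os.Stat(key); err != nil {
		// Key missing: br_netfilter is not loaded yet, so load it (needs root).
		fmt.Println("bridge-nf-call-iptables not present, loading br_netfilter:", err)
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v\n%s", err, out)
			return
		}
	}

	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Println("could not enable ip_forward (need root?):", err)
	}
}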
	I0625 15:10:54.239670   22036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 15:10:54.366925   22036 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0625 15:10:54.493463   22036 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0625 15:10:54.493542   22036 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0625 15:10:54.498189   22036 start.go:562] Will wait 60s for crictl version
	I0625 15:10:54.498234   22036 ssh_runner.go:195] Run: which crictl
	I0625 15:10:54.501799   22036 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0625 15:10:54.552565   22036 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0625 15:10:54.552687   22036 ssh_runner.go:195] Run: crio --version
	I0625 15:10:54.579526   22036 ssh_runner.go:195] Run: crio --version
	I0625 15:10:54.608935   22036 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0625 15:10:54.610142   22036 main.go:141] libmachine: (addons-739670) Calling .GetIP
	I0625 15:10:54.612710   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:54.613066   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:10:54.613095   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:10:54.613279   22036 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0625 15:10:54.617211   22036 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
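Here, and again later for control-plane.minikube.internal, the pattern is the same: drop any stale line for the host name, then append the current IP. A small Go sketch of that idempotent /etc/hosts update (local file access assumed for illustration; the log does it with grep -v and echo over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry removes any existing line ending in "\t<name>" and appends
// "ip\tname", mirroring the grep -v / echo pipeline in the log.
func upsertHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	// Trim trailing empty elements so blank lines don't accumulate.
	for len(kept) > 0 && kept[len(kept)-1] == "" {
		kept = kept[:len(kept)-1]
	}
	kept = append(kept, ip+"\t"+name, "")
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")), 0o644)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println("error:", err)
	}
}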
	I0625 15:10:54.629698   22036 kubeadm.go:877] updating cluster {Name:addons-739670 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-739670 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0625 15:10:54.629803   22036 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0625 15:10:54.629844   22036 ssh_runner.go:195] Run: sudo crictl images --output json
	I0625 15:10:54.660740   22036 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0625 15:10:54.660803   22036 ssh_runner.go:195] Run: which lz4
	I0625 15:10:54.664542   22036 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0625 15:10:54.668488   22036 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0625 15:10:54.668516   22036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0625 15:10:55.978759   22036 crio.go:462] duration metric: took 1.314256899s to copy over tarball
	I0625 15:10:55.978823   22036 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0625 15:10:58.128171   22036 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.149323375s)
	I0625 15:10:58.128213   22036 crio.go:469] duration metric: took 2.149426771s to extract the tarball
	I0625 15:10:58.128223   22036 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0625 15:10:58.165075   22036 ssh_runner.go:195] Run: sudo crictl images --output json
	I0625 15:10:58.205300   22036 crio.go:514] all images are preloaded for cri-o runtime.
	I0625 15:10:58.205323   22036 cache_images.go:84] Images are preloaded, skipping loading
	I0625 15:10:58.205331   22036 kubeadm.go:928] updating node { 192.168.39.224 8443 v1.30.2 crio true true} ...
	I0625 15:10:58.205424   22036 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-739670 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.224
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:addons-739670 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0625 15:10:58.205488   22036 ssh_runner.go:195] Run: crio config
	I0625 15:10:58.249055   22036 cni.go:84] Creating CNI manager for ""
	I0625 15:10:58.249083   22036 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0625 15:10:58.249103   22036 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0625 15:10:58.249132   22036 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.224 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-739670 NodeName:addons-739670 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.224"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.224 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0625 15:10:58.249308   22036 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.224
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-739670"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.224
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.224"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
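The YAML above is rendered from the kubeadm options struct logged just before it. A toy text/template sketch showing how one fragment (node registration) could be rendered from such a struct; the template and struct here are illustrative only, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// nodeRegistration holds just the values needed for this illustrative fragment.
type nodeRegistration struct {
	CRISocket string
	NodeName  string
	NodeIP    string
}

const fragment = `nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	tmpl := template.Must(template.New("nodeRegistration").Parse(fragment))
	// Values taken from the generated config above.
	err := tmpl.Execute(os.Stdout, nodeRegistration{
		CRISocket: "unix:///var/run/crio/crio.sock",
		NodeName:  "addons-739670",
		NodeIP:    "192.168.39.224",
	})
	if err != nil {
		panic(err)
	}
}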
	I0625 15:10:58.249384   22036 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0625 15:10:58.258975   22036 binaries.go:44] Found k8s binaries, skipping transfer
	I0625 15:10:58.259033   22036 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0625 15:10:58.268067   22036 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0625 15:10:58.284237   22036 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0625 15:10:58.300035   22036 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0625 15:10:58.316293   22036 ssh_runner.go:195] Run: grep 192.168.39.224	control-plane.minikube.internal$ /etc/hosts
	I0625 15:10:58.319863   22036 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.224	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0625 15:10:58.331110   22036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 15:10:58.460691   22036 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0625 15:10:58.477678   22036 certs.go:68] Setting up /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670 for IP: 192.168.39.224
	I0625 15:10:58.477703   22036 certs.go:194] generating shared ca certs ...
	I0625 15:10:58.477722   22036 certs.go:226] acquiring lock for ca certs: {Name:mkac904b769881cd26c50f043dc80ff92937f71f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:10:58.477861   22036 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key
	I0625 15:10:58.556777   22036 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt ...
	I0625 15:10:58.556802   22036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt: {Name:mk2c0f5eddd3e5693934fbb925fea46b6d27f727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:10:58.556999   22036 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key ...
	I0625 15:10:58.557014   22036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key: {Name:mk3156ee412331024142a93fdf4d6af61fb99db6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:10:58.557105   22036 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key
	I0625 15:10:58.756796   22036 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.crt ...
	I0625 15:10:58.756823   22036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.crt: {Name:mk508225089874a5f201b5b07b120a41a61603dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:10:58.756963   22036 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key ...
	I0625 15:10:58.756973   22036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key: {Name:mk54a5f4d743b75e9184179de0269d45a7ca8e38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:10:58.757037   22036 certs.go:256] generating profile certs ...
	I0625 15:10:58.757092   22036 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/client.key
	I0625 15:10:58.757105   22036 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/client.crt with IP's: []
	I0625 15:10:58.973196   22036 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/client.crt ...
	I0625 15:10:58.973226   22036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/client.crt: {Name:mk2f35675eeef846e2f910636f190f8fed8b1e01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:10:58.973414   22036 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/client.key ...
	I0625 15:10:58.973431   22036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/client.key: {Name:mk6f76bbe58994ddbd215e2983216bfa874824b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:10:58.973526   22036 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/apiserver.key.d0f1fd8c
	I0625 15:10:58.973551   22036 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/apiserver.crt.d0f1fd8c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.224]
	I0625 15:10:59.197641   22036 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/apiserver.crt.d0f1fd8c ...
	I0625 15:10:59.197669   22036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/apiserver.crt.d0f1fd8c: {Name:mkbb2e6d55cf870d07be23030d5298c3d5a1e790 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:10:59.197838   22036 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/apiserver.key.d0f1fd8c ...
	I0625 15:10:59.197857   22036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/apiserver.key.d0f1fd8c: {Name:mk32bb1959d57796fe922f90bb8a532db6ee1811 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:10:59.197953   22036 certs.go:381] copying /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/apiserver.crt.d0f1fd8c -> /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/apiserver.crt
	I0625 15:10:59.198064   22036 certs.go:385] copying /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/apiserver.key.d0f1fd8c -> /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/apiserver.key
	I0625 15:10:59.198126   22036 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/proxy-client.key
	I0625 15:10:59.198151   22036 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/proxy-client.crt with IP's: []
	I0625 15:10:59.278653   22036 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/proxy-client.crt ...
	I0625 15:10:59.278682   22036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/proxy-client.crt: {Name:mkd9001f15a4aa0ed28737c890d58bc01e1068d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:10:59.278853   22036 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/proxy-client.key ...
	I0625 15:10:59.278908   22036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/proxy-client.key: {Name:mkb20a12290c9fbbf0c85be633dccaf7ed15c105 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:10:59.279187   22036 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem (1679 bytes)
	I0625 15:10:59.279240   22036 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem (1078 bytes)
	I0625 15:10:59.279276   22036 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem (1123 bytes)
	I0625 15:10:59.279309   22036 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem (1679 bytes)
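The apiserver profile cert generated above is signed for the IPs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.224. A self-contained Go sketch of issuing a leaf certificate with those IP SANs from a throwaway CA (illustration only; the CA here merely stands in for minikubeCA, and key sizes/lifetimes are assumptions):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Throwaway CA standing in for minikubeCA.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// API server leaf certificate with the IP SANs reported in the log.
	leafKey := must(rsa.GenerateKey(rand.Reader, 2048))
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.224"),
		},
	}
	leafDER := must(x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey))
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER}); err != nil {
		panic(err)
	}
}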
	I0625 15:10:59.280090   22036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0625 15:10:59.327988   22036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0625 15:10:59.352463   22036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0625 15:10:59.374839   22036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0625 15:10:59.397284   22036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0625 15:10:59.419520   22036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0625 15:10:59.443175   22036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0625 15:10:59.466586   22036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/addons-739670/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0625 15:10:59.490698   22036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0625 15:10:59.514747   22036 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0625 15:10:59.531352   22036 ssh_runner.go:195] Run: openssl version
	I0625 15:10:59.537171   22036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0625 15:10:59.547913   22036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0625 15:10:59.552556   22036 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 25 15:10 /usr/share/ca-certificates/minikubeCA.pem
	I0625 15:10:59.552602   22036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0625 15:10:59.558460   22036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0625 15:10:59.569182   22036 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0625 15:10:59.573345   22036 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0625 15:10:59.573435   22036 kubeadm.go:391] StartCluster: {Name:addons-739670 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-739670 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 15:10:59.573547   22036 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0625 15:10:59.573586   22036 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0625 15:10:59.611626   22036 cri.go:89] found id: ""
	I0625 15:10:59.611701   22036 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0625 15:10:59.621965   22036 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0625 15:10:59.631882   22036 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0625 15:10:59.641307   22036 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0625 15:10:59.641326   22036 kubeadm.go:156] found existing configuration files:
	
	I0625 15:10:59.641367   22036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0625 15:10:59.655965   22036 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0625 15:10:59.656030   22036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0625 15:10:59.666335   22036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0625 15:10:59.675404   22036 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0625 15:10:59.675453   22036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0625 15:10:59.685109   22036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0625 15:10:59.694178   22036 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0625 15:10:59.694219   22036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0625 15:10:59.703613   22036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0625 15:10:59.712877   22036 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0625 15:10:59.712931   22036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0625 15:10:59.722284   22036 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0625 15:10:59.785064   22036 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0625 15:10:59.785185   22036 kubeadm.go:309] [preflight] Running pre-flight checks
	I0625 15:10:59.922487   22036 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0625 15:10:59.922610   22036 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0625 15:10:59.922750   22036 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0625 15:11:00.152344   22036 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0625 15:11:00.445322   22036 out.go:204]   - Generating certificates and keys ...
	I0625 15:11:00.445419   22036 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0625 15:11:00.445522   22036 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0625 15:11:00.445618   22036 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0625 15:11:00.445690   22036 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0625 15:11:00.636935   22036 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0625 15:11:00.879891   22036 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0625 15:11:01.212306   22036 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0625 15:11:01.212620   22036 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-739670 localhost] and IPs [192.168.39.224 127.0.0.1 ::1]
	I0625 15:11:01.296082   22036 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0625 15:11:01.296258   22036 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-739670 localhost] and IPs [192.168.39.224 127.0.0.1 ::1]
	I0625 15:11:01.626585   22036 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0625 15:11:01.792730   22036 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0625 15:11:02.018102   22036 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0625 15:11:02.018198   22036 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0625 15:11:02.098621   22036 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0625 15:11:02.189816   22036 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0625 15:11:02.375401   22036 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0625 15:11:02.422092   22036 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0625 15:11:02.513542   22036 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0625 15:11:02.514143   22036 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0625 15:11:02.516465   22036 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0625 15:11:02.518109   22036 out.go:204]   - Booting up control plane ...
	I0625 15:11:02.518186   22036 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0625 15:11:02.518794   22036 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0625 15:11:02.520472   22036 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0625 15:11:02.537139   22036 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0625 15:11:02.538423   22036 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0625 15:11:02.538528   22036 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0625 15:11:02.669994   22036 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0625 15:11:02.670083   22036 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0625 15:11:03.670869   22036 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001825266s
	I0625 15:11:03.670955   22036 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0625 15:11:09.169750   22036 kubeadm.go:309] [api-check] The API server is healthy after 5.502196979s
	I0625 15:11:09.180716   22036 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0625 15:11:09.200189   22036 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0625 15:11:09.223918   22036 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0625 15:11:09.224135   22036 kubeadm.go:309] [mark-control-plane] Marking the node addons-739670 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0625 15:11:09.238195   22036 kubeadm.go:309] [bootstrap-token] Using token: bhhvp2.5c3cgqw8rgd1ig2z
	I0625 15:11:09.239658   22036 out.go:204]   - Configuring RBAC rules ...
	I0625 15:11:09.239810   22036 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0625 15:11:09.247559   22036 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0625 15:11:09.258345   22036 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0625 15:11:09.264084   22036 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0625 15:11:09.266873   22036 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0625 15:11:09.270327   22036 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0625 15:11:09.577980   22036 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0625 15:11:10.008678   22036 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0625 15:11:10.577525   22036 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0625 15:11:10.578508   22036 kubeadm.go:309] 
	I0625 15:11:10.578585   22036 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0625 15:11:10.578595   22036 kubeadm.go:309] 
	I0625 15:11:10.578726   22036 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0625 15:11:10.578750   22036 kubeadm.go:309] 
	I0625 15:11:10.578800   22036 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0625 15:11:10.578884   22036 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0625 15:11:10.578966   22036 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0625 15:11:10.578981   22036 kubeadm.go:309] 
	I0625 15:11:10.579063   22036 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0625 15:11:10.579072   22036 kubeadm.go:309] 
	I0625 15:11:10.579128   22036 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0625 15:11:10.579136   22036 kubeadm.go:309] 
	I0625 15:11:10.579199   22036 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0625 15:11:10.579279   22036 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0625 15:11:10.579386   22036 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0625 15:11:10.579395   22036 kubeadm.go:309] 
	I0625 15:11:10.579508   22036 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0625 15:11:10.579631   22036 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0625 15:11:10.579641   22036 kubeadm.go:309] 
	I0625 15:11:10.579713   22036 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token bhhvp2.5c3cgqw8rgd1ig2z \
	I0625 15:11:10.579801   22036 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:df4523a4334c80aff4a7c2fc7b4a73691744a675a28cdb3d4468287f693ab03d \
	I0625 15:11:10.579822   22036 kubeadm.go:309] 	--control-plane 
	I0625 15:11:10.579829   22036 kubeadm.go:309] 
	I0625 15:11:10.579911   22036 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0625 15:11:10.579918   22036 kubeadm.go:309] 
	I0625 15:11:10.579988   22036 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token bhhvp2.5c3cgqw8rgd1ig2z \
	I0625 15:11:10.580074   22036 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:df4523a4334c80aff4a7c2fc7b4a73691744a675a28cdb3d4468287f693ab03d 
	I0625 15:11:10.580483   22036 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0625 15:11:10.580637   22036 cni.go:84] Creating CNI manager for ""
	I0625 15:11:10.580653   22036 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0625 15:11:10.582264   22036 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0625 15:11:10.583450   22036 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0625 15:11:10.598487   22036 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0625 15:11:10.625162   22036 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0625 15:11:10.625254   22036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:11:10.625269   22036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-739670 minikube.k8s.io/updated_at=2024_06_25T15_11_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b minikube.k8s.io/name=addons-739670 minikube.k8s.io/primary=true
	I0625 15:11:10.676937   22036 ops.go:34] apiserver oom_adj: -16
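The -16 reported above comes from reading /proc/<pid>/oom_adj for the kube-apiserver process, the Go-side equivalent of `cat /proc/$(pgrep kube-apiserver)/oom_adj`. A small sketch of that lookup (scanning /proc/*/comm is an illustrative shortcut, not minikube's exact code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// oomAdjForProcess scans /proc for a process whose comm matches name and
// returns the contents of its oom_adj file.
func oomAdjForProcess(name string) (string, error) {
	procDirs, err := filepath.Glob("/proc/[0-9]*")
	if err != nil {
		return "", err
	}
	for _, dir := range procDirs {
		comm, err := os.ReadFile(filepath.Join(dir, "comm"))
		if err != nil {
			continue // process may have exited while scanning
		}
		if strings.TrimSpace(string(comm)) != name {
			continue
		}
		adj, err := os.ReadFile(filepath.Join(dir, "oom_adj"))
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(adj)), nil
	}
	return "", fmt.Errorf("process %q not found", name)
}

func main() {
	adj, err := oomAdjForProcess("kube-apiserver")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("kube-apiserver oom_adj:", adj)
}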
	I0625 15:11:10.811251   22036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:11:11.311607   22036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:11:11.811930   22036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:11:12.311343   22036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:11:12.812209   22036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:11:13.311960   22036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:11:13.812192   22036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:11:14.311609   22036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:11:14.812285   22036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:11:15.311438   22036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:11:15.811667   22036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:11:16.312134   22036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:11:16.811745   22036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:11:17.311910   22036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:11:17.812031   22036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:11:18.312195   22036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:11:18.811867   22036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:11:19.311297   22036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:11:19.811923   22036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:11:20.311688   22036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:11:20.811705   22036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:11:21.311353   22036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:11:21.811418   22036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:11:22.312237   22036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:11:22.811303   22036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:11:23.311957   22036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:11:23.426852   22036 kubeadm.go:1107] duration metric: took 12.80166386s to wait for elevateKubeSystemPrivileges
	W0625 15:11:23.426890   22036 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0625 15:11:23.426900   22036 kubeadm.go:393] duration metric: took 23.853510544s to StartCluster
	I0625 15:11:23.426920   22036 settings.go:142] acquiring lock: {Name:mk38d7db80b40da56857d65b8e7da05700cdb9d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:11:23.427038   22036 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19128-13846/kubeconfig
	I0625 15:11:23.427417   22036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/kubeconfig: {Name:mk71a37176bd7deadd1f1cd3c756fe56f3b0810d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:11:23.427632   22036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0625 15:11:23.427652   22036 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.224 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0625 15:11:23.427711   22036 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0625 15:11:23.427819   22036 addons.go:69] Setting yakd=true in profile "addons-739670"
	I0625 15:11:23.427830   22036 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-739670"
	I0625 15:11:23.427857   22036 addons.go:234] Setting addon yakd=true in "addons-739670"
	I0625 15:11:23.427861   22036 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-739670"
	I0625 15:11:23.427869   22036 addons.go:69] Setting ingress=true in profile "addons-739670"
	I0625 15:11:23.427890   22036 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-739670"
	I0625 15:11:23.427897   22036 host.go:66] Checking if "addons-739670" exists ...
	I0625 15:11:23.427899   22036 addons.go:69] Setting registry=true in profile "addons-739670"
	I0625 15:11:23.427907   22036 addons.go:234] Setting addon ingress=true in "addons-739670"
	I0625 15:11:23.427916   22036 addons.go:234] Setting addon registry=true in "addons-739670"
	I0625 15:11:23.427919   22036 addons.go:69] Setting inspektor-gadget=true in profile "addons-739670"
	I0625 15:11:23.427932   22036 addons.go:69] Setting cloud-spanner=true in profile "addons-739670"
	I0625 15:11:23.427940   22036 host.go:66] Checking if "addons-739670" exists ...
	I0625 15:11:23.427950   22036 addons.go:234] Setting addon cloud-spanner=true in "addons-739670"
	I0625 15:11:23.427955   22036 host.go:66] Checking if "addons-739670" exists ...
	I0625 15:11:23.427957   22036 addons.go:234] Setting addon inspektor-gadget=true in "addons-739670"
	I0625 15:11:23.427967   22036 host.go:66] Checking if "addons-739670" exists ...
	I0625 15:11:23.427984   22036 host.go:66] Checking if "addons-739670" exists ...
	I0625 15:11:23.428020   22036 addons.go:69] Setting storage-provisioner=true in profile "addons-739670"
	I0625 15:11:23.428040   22036 addons.go:234] Setting addon storage-provisioner=true in "addons-739670"
	I0625 15:11:23.428062   22036 host.go:66] Checking if "addons-739670" exists ...
	I0625 15:11:23.428319   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.428367   22036 addons.go:69] Setting default-storageclass=true in profile "addons-739670"
	I0625 15:11:23.428382   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.428387   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.428397   22036 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-739670"
	I0625 15:11:23.428419   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.428437   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.428468   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.427890   22036 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-739670"
	I0625 15:11:23.427905   22036 config.go:182] Loaded profile config "addons-739670": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 15:11:23.428336   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.427926   22036 host.go:66] Checking if "addons-739670" exists ...
	I0625 15:11:23.428511   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.428346   22036 addons.go:69] Setting metrics-server=true in profile "addons-739670"
	I0625 15:11:23.428567   22036 addons.go:234] Setting addon metrics-server=true in "addons-739670"
	I0625 15:11:23.428351   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.428590   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.428355   22036 addons.go:69] Setting volumesnapshots=true in profile "addons-739670"
	I0625 15:11:23.428642   22036 addons.go:234] Setting addon volumesnapshots=true in "addons-739670"
	I0625 15:11:23.428360   22036 addons.go:69] Setting volcano=true in profile "addons-739670"
	I0625 15:11:23.428664   22036 addons.go:234] Setting addon volcano=true in "addons-739670"
	I0625 15:11:23.427849   22036 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-739670"
	I0625 15:11:23.427864   22036 addons.go:69] Setting helm-tiller=true in profile "addons-739670"
	I0625 15:11:23.428684   22036 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-739670"
	I0625 15:11:23.428362   22036 addons.go:69] Setting ingress-dns=true in profile "addons-739670"
	I0625 15:11:23.428702   22036 addons.go:234] Setting addon helm-tiller=true in "addons-739670"
	I0625 15:11:23.428705   22036 addons.go:234] Setting addon ingress-dns=true in "addons-739670"
	I0625 15:11:23.428774   22036 host.go:66] Checking if "addons-739670" exists ...
	I0625 15:11:23.428790   22036 host.go:66] Checking if "addons-739670" exists ...
	I0625 15:11:23.428822   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.428836   22036 addons.go:69] Setting gcp-auth=true in profile "addons-739670"
	I0625 15:11:23.428849   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.428856   22036 mustload.go:65] Loading cluster: addons-739670
	I0625 15:11:23.428900   22036 host.go:66] Checking if "addons-739670" exists ...
	I0625 15:11:23.429071   22036 config.go:182] Loaded profile config "addons-739670": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 15:11:23.428775   22036 host.go:66] Checking if "addons-739670" exists ...
	I0625 15:11:23.429105   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.429134   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.429209   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.429227   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.429272   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.429296   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.429341   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.429366   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.429383   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.429407   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.429452   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.429468   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.429502   22036 host.go:66] Checking if "addons-739670" exists ...
	I0625 15:11:23.429535   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.429551   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.429574   22036 out.go:177] * Verifying Kubernetes components...
	I0625 15:11:23.429641   22036 host.go:66] Checking if "addons-739670" exists ...
	I0625 15:11:23.429947   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.429962   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.429975   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.429990   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.431314   22036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 15:11:23.457143   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.457178   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.457238   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33577
	I0625 15:11:23.457255   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39459
	I0625 15:11:23.457278   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44823
	I0625 15:11:23.457258   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46037
	I0625 15:11:23.457735   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.457744   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.457831   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.457922   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.458228   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.458233   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.458250   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.458251   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.458258   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.458265   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.458605   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.458687   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.458821   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.458836   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.459263   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.459306   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.459508   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.459545   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.459561   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.460123   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.460164   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.460773   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.460945   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37783
	I0625 15:11:23.461360   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.461923   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.461956   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.462011   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.462025   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.462402   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.462968   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.462999   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.474143   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46243
	I0625 15:11:23.474530   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36993
	I0625 15:11:23.475144   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.475499   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.476015   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.476030   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.476376   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.476392   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.476504   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.477113   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.477153   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.477401   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.478022   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.478065   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.481349   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35511
	I0625 15:11:23.481764   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.482240   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.482265   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.482676   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.483200   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.483249   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.491193   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36895
	I0625 15:11:23.491754   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.492313   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.492330   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.492693   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.493250   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.493274   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.500158   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34063
	I0625 15:11:23.500165   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42707
	I0625 15:11:23.500512   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.500623   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.501092   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.501111   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.501241   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.501256   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.501618   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.501667   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.502261   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.502308   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.502937   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.502977   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.504469   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38723
	I0625 15:11:23.508532   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.509011   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.509030   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.509340   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.509551   22036 main.go:141] libmachine: (addons-739670) Calling .GetState
	I0625 15:11:23.509631   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45349
	I0625 15:11:23.510085   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.510649   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.510668   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.511046   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.511167   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36087
	I0625 15:11:23.511507   22036 main.go:141] libmachine: (addons-739670) Calling .GetState
	I0625 15:11:23.512151   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.512784   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.512800   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.512873   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40119
	I0625 15:11:23.512976   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43977
	I0625 15:11:23.513131   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.513245   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.513340   22036 main.go:141] libmachine: (addons-739670) Calling .GetState
	I0625 15:11:23.514101   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.514233   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.514247   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.514812   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.514826   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.514884   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.515500   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.515536   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.516267   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.516486   22036 main.go:141] libmachine: (addons-739670) Calling .GetState
	I0625 15:11:23.516600   22036 main.go:141] libmachine: (addons-739670) Calling .DriverName
	I0625 15:11:23.518761   22036 addons.go:234] Setting addon default-storageclass=true in "addons-739670"
	I0625 15:11:23.518805   22036 host.go:66] Checking if "addons-739670" exists ...
	I0625 15:11:23.519174   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.519215   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.520432   22036 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-739670"
	I0625 15:11:23.520467   22036 host.go:66] Checking if "addons-739670" exists ...
	I0625 15:11:23.520844   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.520874   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.522236   22036 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0625 15:11:23.523224   22036 main.go:141] libmachine: (addons-739670) Calling .DriverName
	I0625 15:11:23.523750   22036 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0625 15:11:23.523765   22036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0625 15:11:23.523784   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHHostname
	I0625 15:11:23.525940   22036 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0625 15:11:23.527404   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:23.527620   22036 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0625 15:11:23.527637   22036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0625 15:11:23.527654   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHHostname
	I0625 15:11:23.528794   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:11:23.529944   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:23.530343   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHPort
	I0625 15:11:23.530769   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:23.530992   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:11:23.531035   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHPort
	I0625 15:11:23.531087   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:23.531249   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:11:23.531394   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHUsername
	I0625 15:11:23.531441   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:11:23.531617   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHUsername
	I0625 15:11:23.531676   22036 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670/id_rsa Username:docker}
	I0625 15:11:23.532102   22036 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670/id_rsa Username:docker}
	I0625 15:11:23.532741   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38917
	I0625 15:11:23.532864   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42503
	I0625 15:11:23.533188   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.533636   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.533654   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.534069   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.534279   22036 main.go:141] libmachine: (addons-739670) Calling .GetState
	I0625 15:11:23.535724   22036 main.go:141] libmachine: (addons-739670) Calling .DriverName
	I0625 15:11:23.537409   22036 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0625 15:11:23.537509   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.538012   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.538035   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.538799   22036 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0625 15:11:23.538817   22036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0625 15:11:23.538835   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHHostname
	I0625 15:11:23.541080   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35837
	I0625 15:11:23.541510   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.541959   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.541977   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.542376   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.542512   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.543440   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.543480   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.543844   22036 main.go:141] libmachine: (addons-739670) Calling .GetState
	I0625 15:11:23.543899   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32795
	I0625 15:11:23.544048   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:23.544391   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:11:23.544413   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:23.544457   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.544980   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.544997   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.545052   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHPort
	I0625 15:11:23.545715   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.545765   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:11:23.545810   22036 main.go:141] libmachine: (addons-739670) Calling .DriverName
	I0625 15:11:23.546029   22036 main.go:141] libmachine: (addons-739670) Calling .GetState
	I0625 15:11:23.546094   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39139
	I0625 15:11:23.546171   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHUsername
	I0625 15:11:23.546369   22036 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670/id_rsa Username:docker}
	I0625 15:11:23.546683   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.547623   22036 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0625 15:11:23.548127   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38713
	I0625 15:11:23.548502   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.548518   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.548545   22036 main.go:141] libmachine: (addons-739670) Calling .DriverName
	I0625 15:11:23.548626   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.548838   22036 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0625 15:11:23.548854   22036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0625 15:11:23.548870   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHHostname
	I0625 15:11:23.549518   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.549633   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.549654   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.549974   22036 main.go:141] libmachine: (addons-739670) Calling .GetState
	I0625 15:11:23.549986   22036 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0625 15:11:23.550044   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.550580   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.550619   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.550974   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43843
	I0625 15:11:23.551351   22036 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0625 15:11:23.551368   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.551370   22036 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0625 15:11:23.551441   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHHostname
	I0625 15:11:23.552374   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:23.552503   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.552526   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.552592   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36857
	I0625 15:11:23.552867   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.553334   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.553357   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.553450   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35883
	I0625 15:11:23.553555   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:11:23.553570   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:23.553830   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHPort
	I0625 15:11:23.553872   22036 main.go:141] libmachine: (addons-739670) Calling .DriverName
	I0625 15:11:23.553917   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.553981   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.554279   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:11:23.554416   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHUsername
	I0625 15:11:23.554547   22036 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670/id_rsa Username:docker}
	I0625 15:11:23.554810   22036 main.go:141] libmachine: (addons-739670) Calling .GetState
	I0625 15:11:23.554942   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.554961   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.555125   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:23.555252   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.555421   22036 main.go:141] libmachine: (addons-739670) Calling .GetState
	I0625 15:11:23.555477   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:11:23.555491   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:23.555525   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.555617   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHPort
	I0625 15:11:23.555746   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:11:23.555908   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHUsername
	I0625 15:11:23.555950   22036 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0625 15:11:23.556069   22036 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670/id_rsa Username:docker}
	I0625 15:11:23.556440   22036 main.go:141] libmachine: (addons-739670) Calling .GetState
	I0625 15:11:23.556499   22036 main.go:141] libmachine: (addons-739670) Calling .DriverName
	I0625 15:11:23.557395   22036 main.go:141] libmachine: (addons-739670) Calling .DriverName
	I0625 15:11:23.557432   22036 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0625 15:11:23.557447   22036 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0625 15:11:23.557464   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHHostname
	I0625 15:11:23.558352   22036 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0625 15:11:23.559237   22036 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0625 15:11:23.560171   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33613
	I0625 15:11:23.560485   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.560797   22036 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0625 15:11:23.561018   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.561032   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.561102   22036 main.go:141] libmachine: (addons-739670) Calling .DriverName
	I0625 15:11:23.561685   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38875
	I0625 15:11:23.561791   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:23.561800   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:23.561841   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.562200   22036 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0625 15:11:23.563364   22036 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0625 15:11:23.564799   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:23.564815   22036 main.go:141] libmachine: (addons-739670) DBG | Closing plugin on server side
	I0625 15:11:23.564843   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:23.564851   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:23.564854   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.564866   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:23.564874   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:23.564935   22036 main.go:141] libmachine: (addons-739670) Calling .GetState
	I0625 15:11:23.565339   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:23.565354   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:23.565672   22036 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0625 15:11:23.565687   22036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0625 15:11:23.565701   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHHostname
	I0625 15:11:23.566012   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.566031   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.566262   22036 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0625 15:11:23.566493   22036 main.go:141] libmachine: () Calling .GetMachineName
	W0625 15:11:23.566606   22036 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0625 15:11:23.566616   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:11:23.566632   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:23.566825   22036 main.go:141] libmachine: (addons-739670) Calling .GetState
	I0625 15:11:23.566874   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42033
	I0625 15:11:23.567034   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHPort
	I0625 15:11:23.567247   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:11:23.567408   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHUsername
	I0625 15:11:23.567570   22036 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670/id_rsa Username:docker}
	I0625 15:11:23.568158   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.568652   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.568668   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.569008   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.569185   22036 main.go:141] libmachine: (addons-739670) Calling .GetState
	I0625 15:11:23.569533   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:23.569612   22036 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0625 15:11:23.569854   22036 host.go:66] Checking if "addons-739670" exists ...
	I0625 15:11:23.570282   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.570315   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.570369   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:11:23.570399   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:23.570746   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHPort
	I0625 15:11:23.570991   22036 main.go:141] libmachine: (addons-739670) Calling .DriverName
	I0625 15:11:23.571395   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:11:23.571526   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHUsername
	I0625 15:11:23.571670   22036 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670/id_rsa Username:docker}
	I0625 15:11:23.572136   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43339
	I0625 15:11:23.572470   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.572592   22036 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0625 15:11:23.572596   22036 out.go:177]   - Using image docker.io/registry:2.8.3
	I0625 15:11:23.573112   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35463
	I0625 15:11:23.573163   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.573174   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.573455   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.573602   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.573679   22036 main.go:141] libmachine: (addons-739670) Calling .GetState
	I0625 15:11:23.574205   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.574223   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.574749   22036 main.go:141] libmachine: (addons-739670) Calling .DriverName
	I0625 15:11:23.574813   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.575185   22036 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0625 15:11:23.575236   22036 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0625 15:11:23.575555   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:23.575593   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:23.575807   22036 main.go:141] libmachine: (addons-739670) Calling .DriverName
	I0625 15:11:23.576651   22036 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0625 15:11:23.576938   22036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0625 15:11:23.576959   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHHostname
	I0625 15:11:23.577366   22036 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.29.0
	I0625 15:11:23.578073   22036 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0625 15:11:23.578362   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41619
	I0625 15:11:23.578762   22036 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0625 15:11:23.578767   22036 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0625 15:11:23.578778   22036 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0625 15:11:23.579211   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHHostname
	I0625 15:11:23.578891   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.579506   22036 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0625 15:11:23.579518   22036 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0625 15:11:23.579534   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHHostname
	I0625 15:11:23.580781   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.580806   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.581240   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.581290   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:23.581556   22036 main.go:141] libmachine: (addons-739670) Calling .GetState
	I0625 15:11:23.581618   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:11:23.581631   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:23.581763   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHPort
	I0625 15:11:23.581810   22036 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0625 15:11:23.582076   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:11:23.582327   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHUsername
	I0625 15:11:23.582670   22036 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670/id_rsa Username:docker}
	I0625 15:11:23.583013   22036 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0625 15:11:23.583030   22036 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0625 15:11:23.583047   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHHostname
	I0625 15:11:23.584377   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44395
	I0625 15:11:23.584939   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:23.585051   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:23.585240   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.585572   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:11:23.585593   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:23.585749   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.585770   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.585749   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHPort
	I0625 15:11:23.585906   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:11:23.586007   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHUsername
	I0625 15:11:23.586109   22036 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670/id_rsa Username:docker}
	I0625 15:11:23.586129   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.586297   22036 main.go:141] libmachine: (addons-739670) Calling .GetState
	I0625 15:11:23.586500   22036 main.go:141] libmachine: (addons-739670) Calling .DriverName
	I0625 15:11:23.586962   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:23.587699   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:11:23.587720   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:23.588035   22036 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0625 15:11:23.588125   22036 main.go:141] libmachine: (addons-739670) Calling .DriverName
	I0625 15:11:23.588214   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHPort
	I0625 15:11:23.588307   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:11:23.588325   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:23.588337   22036 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0625 15:11:23.588347   22036 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0625 15:11:23.588360   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHHostname
	I0625 15:11:23.588362   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:11:23.588527   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHUsername
	I0625 15:11:23.588585   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHPort
	I0625 15:11:23.588633   22036 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670/id_rsa Username:docker}
	I0625 15:11:23.588733   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:11:23.588864   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHUsername
	I0625 15:11:23.589128   22036 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670/id_rsa Username:docker}
	I0625 15:11:23.589474   22036 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0625 15:11:23.589487   22036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0625 15:11:23.589501   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHHostname
	I0625 15:11:23.591728   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:23.592238   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:11:23.592256   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:23.592341   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:23.592617   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHPort
	I0625 15:11:23.592776   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:11:23.592829   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:11:23.592848   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:23.592949   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHUsername
	I0625 15:11:23.593072   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHPort
	I0625 15:11:23.593083   22036 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670/id_rsa Username:docker}
	I0625 15:11:23.593218   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:11:23.593355   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHUsername
	I0625 15:11:23.593488   22036 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670/id_rsa Username:docker}
	I0625 15:11:23.598789   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41917
	I0625 15:11:23.599124   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.599664   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.599686   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.600019   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.600221   22036 main.go:141] libmachine: (addons-739670) Calling .GetState
	I0625 15:11:23.601778   22036 main.go:141] libmachine: (addons-739670) Calling .DriverName
	I0625 15:11:23.602353   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42631
	I0625 15:11:23.602705   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:23.603188   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:23.603207   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:23.603404   22036 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0625 15:11:23.603507   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:23.603686   22036 main.go:141] libmachine: (addons-739670) Calling .DriverName
	I0625 15:11:23.605966   22036 out.go:177]   - Using image docker.io/busybox:stable
	I0625 15:11:23.607154   22036 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0625 15:11:23.607166   22036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0625 15:11:23.607178   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHHostname
	I0625 15:11:23.610006   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:23.610512   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:11:23.610534   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:23.610701   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHPort
	I0625 15:11:23.610853   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:11:23.610990   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHUsername
	I0625 15:11:23.611125   22036 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670/id_rsa Username:docker}
	I0625 15:11:24.032862   22036 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0625 15:11:24.032893   22036 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0625 15:11:24.045820   22036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0625 15:11:24.072561   22036 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0625 15:11:24.072585   22036 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0625 15:11:24.098277   22036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0625 15:11:24.126765   22036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0625 15:11:24.145523   22036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0625 15:11:24.148374   22036 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0625 15:11:24.148395   22036 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0625 15:11:24.182555   22036 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0625 15:11:24.182583   22036 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0625 15:11:24.185571   22036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0625 15:11:24.203719   22036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0625 15:11:24.209110   22036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0625 15:11:24.217426   22036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0625 15:11:24.247099   22036 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0625 15:11:24.247122   22036 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0625 15:11:24.247267   22036 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0625 15:11:24.247290   22036 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0625 15:11:24.247506   22036 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0625 15:11:24.247519   22036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0625 15:11:24.275971   22036 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0625 15:11:24.275997   22036 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0625 15:11:24.337620   22036 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0625 15:11:24.337640   22036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0625 15:11:24.365513   22036 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0625 15:11:24.365545   22036 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0625 15:11:24.370950   22036 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0625 15:11:24.370953   22036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0625 15:11:24.399122   22036 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0625 15:11:24.399145   22036 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0625 15:11:24.441670   22036 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0625 15:11:24.441694   22036 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0625 15:11:24.487503   22036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0625 15:11:24.515684   22036 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0625 15:11:24.515704   22036 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0625 15:11:24.550593   22036 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0625 15:11:24.550616   22036 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0625 15:11:24.690931   22036 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0625 15:11:24.690956   22036 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0625 15:11:24.722099   22036 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0625 15:11:24.722131   22036 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0625 15:11:24.733954   22036 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0625 15:11:24.733988   22036 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0625 15:11:24.745866   22036 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0625 15:11:24.745895   22036 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0625 15:11:24.753104   22036 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0625 15:11:24.753129   22036 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0625 15:11:24.890963   22036 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0625 15:11:24.890990   22036 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0625 15:11:24.950376   22036 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0625 15:11:24.950405   22036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0625 15:11:24.983041   22036 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0625 15:11:24.983073   22036 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0625 15:11:25.073698   22036 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0625 15:11:25.073721   22036 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0625 15:11:25.106848   22036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0625 15:11:25.140712   22036 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0625 15:11:25.140735   22036 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0625 15:11:25.232811   22036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0625 15:11:25.239104   22036 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0625 15:11:25.239133   22036 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0625 15:11:25.418401   22036 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0625 15:11:25.418433   22036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0625 15:11:25.446883   22036 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0625 15:11:25.446910   22036 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0625 15:11:25.593329   22036 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0625 15:11:25.593358   22036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0625 15:11:25.616386   22036 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0625 15:11:25.616409   22036 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0625 15:11:25.793201   22036 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0625 15:11:25.793225   22036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0625 15:11:25.863680   22036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0625 15:11:25.911461   22036 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0625 15:11:25.911481   22036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0625 15:11:26.206795   22036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0625 15:11:26.359249   22036 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0625 15:11:26.359274   22036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0625 15:11:26.762323   22036 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0625 15:11:26.762348   22036 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0625 15:11:26.876826   22036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0625 15:11:27.908633   22036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.862777991s)
	I0625 15:11:27.908687   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:27.908699   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:27.908700   22036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.810389437s)
	I0625 15:11:27.908740   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:27.908758   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:27.909028   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:27.909042   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:27.909050   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:27.909058   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:27.909141   22036 main.go:141] libmachine: (addons-739670) DBG | Closing plugin on server side
	I0625 15:11:27.909168   22036 main.go:141] libmachine: (addons-739670) DBG | Closing plugin on server side
	I0625 15:11:27.909145   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:27.909199   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:27.909213   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:27.909224   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:27.909268   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:27.909280   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:27.909280   22036 main.go:141] libmachine: (addons-739670) DBG | Closing plugin on server side
	I0625 15:11:27.909547   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:27.909569   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:30.630158   22036 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0625 15:11:30.630203   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHHostname
	I0625 15:11:30.633592   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:30.633971   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:11:30.634005   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:30.634202   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHPort
	I0625 15:11:30.634434   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:11:30.634613   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHUsername
	I0625 15:11:30.634758   22036 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670/id_rsa Username:docker}
	I0625 15:11:31.061396   22036 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0625 15:11:31.192493   22036 addons.go:234] Setting addon gcp-auth=true in "addons-739670"
	I0625 15:11:31.192541   22036 host.go:66] Checking if "addons-739670" exists ...
	I0625 15:11:31.192863   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:31.192894   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:31.208130   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44673
	I0625 15:11:31.208611   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:31.209095   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:31.209113   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:31.209425   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:31.210008   22036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:11:31.210053   22036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:11:31.226893   22036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46097
	I0625 15:11:31.227339   22036 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:11:31.227830   22036 main.go:141] libmachine: Using API Version  1
	I0625 15:11:31.227850   22036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:11:31.228139   22036 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:11:31.228354   22036 main.go:141] libmachine: (addons-739670) Calling .GetState
	I0625 15:11:31.229947   22036 main.go:141] libmachine: (addons-739670) Calling .DriverName
	I0625 15:11:31.230176   22036 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0625 15:11:31.230197   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHHostname
	I0625 15:11:31.232586   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:31.232993   22036 main.go:141] libmachine: (addons-739670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:31:7b", ip: ""} in network mk-addons-739670: {Iface:virbr1 ExpiryTime:2024-06-25 16:10:46 +0000 UTC Type:0 Mac:52:54:00:96:31:7b Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-739670 Clientid:01:52:54:00:96:31:7b}
	I0625 15:11:31.233023   22036 main.go:141] libmachine: (addons-739670) DBG | domain addons-739670 has defined IP address 192.168.39.224 and MAC address 52:54:00:96:31:7b in network mk-addons-739670
	I0625 15:11:31.233160   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHPort
	I0625 15:11:31.233372   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHKeyPath
	I0625 15:11:31.233540   22036 main.go:141] libmachine: (addons-739670) Calling .GetSSHUsername
	I0625 15:11:31.233706   22036 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/addons-739670/id_rsa Username:docker}
	I0625 15:11:32.323404   22036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.19660146s)
	I0625 15:11:32.323450   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:32.323453   22036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.177901492s)
	I0625 15:11:32.323462   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:32.323508   22036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.13791691s)
	I0625 15:11:32.323533   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:32.323546   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:32.323492   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:32.323564   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:32.323587   22036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.114456953s)
	I0625 15:11:32.323529   22036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.119788527s)
	I0625 15:11:32.323605   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:32.323609   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:32.323615   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:32.323618   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:32.323635   22036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.106183794s)
	I0625 15:11:32.323666   22036 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.952687322s)
	I0625 15:11:32.323698   22036 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.952693817s)
	I0625 15:11:32.323712   22036 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0625 15:11:32.323988   22036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.8364561s)
	I0625 15:11:32.324012   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:32.324026   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:32.324025   22036 main.go:141] libmachine: (addons-739670) DBG | Closing plugin on server side
	I0625 15:11:32.324051   22036 main.go:141] libmachine: (addons-739670) DBG | Closing plugin on server side
	I0625 15:11:32.324068   22036 main.go:141] libmachine: (addons-739670) DBG | Closing plugin on server side
	I0625 15:11:32.324099   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:32.324106   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:32.324114   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:32.324122   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:32.324138   22036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.217265402s)
	I0625 15:11:32.324154   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:32.324165   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:32.324227   22036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.091387753s)
	I0625 15:11:32.324239   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:32.324246   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:32.324356   22036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.460647627s)
	W0625 15:11:32.324379   22036 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0625 15:11:32.323667   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:32.324418   22036 retry.go:31] will retry after 276.61762ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
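
The failure above is a CRD establishment race: the VolumeSnapshotClass object is applied in the same kubectl batch as the volumesnapshotclasses CRD, so the API server has not yet registered the new kind ("no matches for kind ... ensure CRDs are installed first"). minikube simply retries, and the later --force re-apply in this log succeeds once the CRDs are established. A minimal Go sketch of that wait, assuming client-go/apiextensions and the kubeconfig path shown in this log; everything else is illustrative, not minikube's own code:

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumption: run against the kubeconfig used by the commands in the log above.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := apiextclient.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		// Wait until the snapshot CRD reports Established before applying
		// VolumeSnapshotClass objects that depend on it.
		const crdName = "volumesnapshotclasses.snapshot.storage.k8s.io"
		deadline := time.Now().Add(60 * time.Second)
		for time.Now().Before(deadline) {
			crd, err := client.ApiextensionsV1().CustomResourceDefinitions().Get(context.TODO(), crdName, metav1.GetOptions{})
			if err == nil {
				for _, cond := range crd.Status.Conditions {
					if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
						fmt.Println("CRD established; VolumeSnapshotClass objects can be applied now")
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		panic("timed out waiting for CRD to become established")
	}
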
	I0625 15:11:32.324427   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:32.324455   22036 main.go:141] libmachine: (addons-739670) DBG | Closing plugin on server side
	I0625 15:11:32.324492   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:32.324497   22036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.117673313s)
	I0625 15:11:32.324500   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:32.324503   22036 main.go:141] libmachine: (addons-739670) DBG | Closing plugin on server side
	I0625 15:11:32.324512   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:32.324514   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:32.324521   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:32.324523   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:32.324583   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:32.324590   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:32.324597   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:32.324603   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:32.324913   22036 node_ready.go:35] waiting up to 6m0s for node "addons-739670" to be "Ready" ...
	I0625 15:11:32.325048   22036 main.go:141] libmachine: (addons-739670) DBG | Closing plugin on server side
	I0625 15:11:32.325078   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:32.325086   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:32.325093   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:32.325100   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:32.325133   22036 main.go:141] libmachine: (addons-739670) DBG | Closing plugin on server side
	I0625 15:11:32.325145   22036 main.go:141] libmachine: (addons-739670) DBG | Closing plugin on server side
	I0625 15:11:32.325162   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:32.325165   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:32.325170   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:32.325173   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:32.325178   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:32.325183   22036 addons.go:475] Verifying addon ingress=true in "addons-739670"
	I0625 15:11:32.325750   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:32.325761   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:32.325769   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:32.325777   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:32.326017   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:32.326030   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:32.326039   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:32.326047   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:32.326097   22036 main.go:141] libmachine: (addons-739670) DBG | Closing plugin on server side
	I0625 15:11:32.326117   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:32.326124   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:32.326131   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:32.326139   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:32.326255   22036 main.go:141] libmachine: (addons-739670) DBG | Closing plugin on server side
	I0625 15:11:32.326274   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:32.326281   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:32.326362   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:32.326368   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:32.326559   22036 main.go:141] libmachine: (addons-739670) DBG | Closing plugin on server side
	I0625 15:11:32.326582   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:32.326588   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:32.326744   22036 main.go:141] libmachine: (addons-739670) DBG | Closing plugin on server side
	I0625 15:11:32.326773   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:32.326779   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:32.326787   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:32.326794   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:32.327557   22036 main.go:141] libmachine: (addons-739670) DBG | Closing plugin on server side
	I0625 15:11:32.327583   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:32.327589   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:32.327598   22036 addons.go:475] Verifying addon registry=true in "addons-739670"
	I0625 15:11:32.327732   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:32.327740   22036 main.go:141] libmachine: (addons-739670) DBG | Closing plugin on server side
	I0625 15:11:32.327744   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:32.327833   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:32.327842   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:32.327850   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:32.327857   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:32.327922   22036 main.go:141] libmachine: (addons-739670) DBG | Closing plugin on server side
	I0625 15:11:32.327992   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:32.327999   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:32.328124   22036 main.go:141] libmachine: (addons-739670) DBG | Closing plugin on server side
	I0625 15:11:32.328147   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:32.328154   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:32.325185   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:32.328375   22036 main.go:141] libmachine: (addons-739670) DBG | Closing plugin on server side
	I0625 15:11:32.328415   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:32.328436   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:32.329090   22036 main.go:141] libmachine: (addons-739670) DBG | Closing plugin on server side
	I0625 15:11:32.329097   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:32.329117   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:32.329126   22036 addons.go:475] Verifying addon metrics-server=true in "addons-739670"
	I0625 15:11:32.330434   22036 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-739670 service yakd-dashboard -n yakd-dashboard
	
	I0625 15:11:32.330491   22036 out.go:177] * Verifying registry addon...
	I0625 15:11:32.330743   22036 out.go:177] * Verifying ingress addon...
	I0625 15:11:32.332855   22036 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0625 15:11:32.333129   22036 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0625 15:11:32.357475   22036 node_ready.go:49] node "addons-739670" has status "Ready":"True"
	I0625 15:11:32.357506   22036 node_ready.go:38] duration metric: took 32.57475ms for node "addons-739670" to be "Ready" ...
	I0625 15:11:32.357518   22036 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0625 15:11:32.372569   22036 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0625 15:11:32.372591   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:32.372739   22036 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0625 15:11:32.372764   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
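
The kapi.go and pod_ready.go lines around here are polling loops over pod readiness: a pod counts as "Ready" once its PodReady condition is True. A hedged Go sketch of that check, reusing the label selector and namespace from the registry wait above; the clientset wiring is illustrative, not the code in kapi.go itself:

	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// podIsReady reports whether the pod's PodReady condition is True,
	// which is the notion of readiness the waits in this log rely on.
	func podIsReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same selector and namespace as the registry wait logged above.
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{
			LabelSelector: "kubernetes.io/minikube-addons=registry",
		})
		if err != nil {
			panic(err)
		}
		for i := range pods.Items {
			p := &pods.Items[i]
			fmt.Printf("%s: ready=%v phase=%s\n", p.Name, podIsReady(p), p.Status.Phase)
		}
	}
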
	I0625 15:11:32.387642   22036 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p5fcd" in "kube-system" namespace to be "Ready" ...
	I0625 15:11:32.393209   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:32.393237   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:32.393490   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:32.393508   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:32.393643   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:32.393664   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:32.393901   22036 main.go:141] libmachine: (addons-739670) DBG | Closing plugin on server side
	I0625 15:11:32.393935   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:32.393943   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	W0625 15:11:32.394032   22036 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
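
The warning above is an optimistic-concurrency conflict: the StorageClass was modified by another writer between the addon's read and its update, so the API server rejects the stale resourceVersion. The usual remedy is to re-read and retry the write, e.g. with client-go's retry.RetryOnConflict. A minimal sketch under that assumption (not minikube's implementation; the class name "local-path" and the default-class annotation come from the error text above):

	package main
	
	import (
		"context"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/retry"
	)
	
	// markDefault re-reads the StorageClass and retries the annotation update
	// whenever the API server reports a resourceVersion conflict, which is the
	// "object has been modified" error shown in the warning above.
	func markDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err
		})
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := markDefault(context.TODO(), cs, "local-path"); err != nil {
			panic(err)
		}
	}
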
	I0625 15:11:32.454929   22036 pod_ready.go:92] pod "coredns-7db6d8ff4d-p5fcd" in "kube-system" namespace has status "Ready":"True"
	I0625 15:11:32.454951   22036 pod_ready.go:81] duration metric: took 67.280893ms for pod "coredns-7db6d8ff4d-p5fcd" in "kube-system" namespace to be "Ready" ...
	I0625 15:11:32.454961   22036 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-v82sg" in "kube-system" namespace to be "Ready" ...
	I0625 15:11:32.486588   22036 pod_ready.go:92] pod "coredns-7db6d8ff4d-v82sg" in "kube-system" namespace has status "Ready":"True"
	I0625 15:11:32.486610   22036 pod_ready.go:81] duration metric: took 31.64315ms for pod "coredns-7db6d8ff4d-v82sg" in "kube-system" namespace to be "Ready" ...
	I0625 15:11:32.486619   22036 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-739670" in "kube-system" namespace to be "Ready" ...
	I0625 15:11:32.534420   22036 pod_ready.go:92] pod "etcd-addons-739670" in "kube-system" namespace has status "Ready":"True"
	I0625 15:11:32.534459   22036 pod_ready.go:81] duration metric: took 47.832278ms for pod "etcd-addons-739670" in "kube-system" namespace to be "Ready" ...
	I0625 15:11:32.534487   22036 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-739670" in "kube-system" namespace to be "Ready" ...
	I0625 15:11:32.553033   22036 pod_ready.go:92] pod "kube-apiserver-addons-739670" in "kube-system" namespace has status "Ready":"True"
	I0625 15:11:32.553059   22036 pod_ready.go:81] duration metric: took 18.564071ms for pod "kube-apiserver-addons-739670" in "kube-system" namespace to be "Ready" ...
	I0625 15:11:32.553071   22036 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-739670" in "kube-system" namespace to be "Ready" ...
	I0625 15:11:32.602112   22036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0625 15:11:32.728973   22036 pod_ready.go:92] pod "kube-controller-manager-addons-739670" in "kube-system" namespace has status "Ready":"True"
	I0625 15:11:32.728996   22036 pod_ready.go:81] duration metric: took 175.916906ms for pod "kube-controller-manager-addons-739670" in "kube-system" namespace to be "Ready" ...
	I0625 15:11:32.729007   22036 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pd5fr" in "kube-system" namespace to be "Ready" ...
	I0625 15:11:32.828556   22036 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-739670" context rescaled to 1 replicas
	I0625 15:11:32.845691   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:32.854144   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:33.128695   22036 pod_ready.go:92] pod "kube-proxy-pd5fr" in "kube-system" namespace has status "Ready":"True"
	I0625 15:11:33.128716   22036 pod_ready.go:81] duration metric: took 399.703794ms for pod "kube-proxy-pd5fr" in "kube-system" namespace to be "Ready" ...
	I0625 15:11:33.128725   22036 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-739670" in "kube-system" namespace to be "Ready" ...
	I0625 15:11:33.354669   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:33.355007   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:33.465624   22036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.588741936s)
	I0625 15:11:33.465676   22036 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.235483255s)
	I0625 15:11:33.465678   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:33.465691   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:33.465990   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:33.466010   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:33.466026   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:33.466041   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:33.466281   22036 main.go:141] libmachine: (addons-739670) DBG | Closing plugin on server side
	I0625 15:11:33.466330   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:33.466342   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:33.466353   22036 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-739670"
	I0625 15:11:33.467483   22036 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0625 15:11:33.467536   22036 out.go:177] * Verifying csi-hostpath-driver addon...
	I0625 15:11:33.468971   22036 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0625 15:11:33.469670   22036 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0625 15:11:33.470033   22036 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0625 15:11:33.470050   22036 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0625 15:11:33.512595   22036 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0625 15:11:33.512624   22036 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0625 15:11:33.554604   22036 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0625 15:11:33.554631   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:33.557900   22036 pod_ready.go:92] pod "kube-scheduler-addons-739670" in "kube-system" namespace has status "Ready":"True"
	I0625 15:11:33.557917   22036 pod_ready.go:81] duration metric: took 429.186609ms for pod "kube-scheduler-addons-739670" in "kube-system" namespace to be "Ready" ...
	I0625 15:11:33.557932   22036 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace to be "Ready" ...
	I0625 15:11:33.591432   22036 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0625 15:11:33.591459   22036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0625 15:11:33.623277   22036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0625 15:11:33.850315   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:33.857237   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:33.979867   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:34.342090   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:34.348127   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:34.475351   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:34.533864   22036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.931706289s)
	I0625 15:11:34.533915   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:34.533930   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:34.534204   22036 main.go:141] libmachine: (addons-739670) DBG | Closing plugin on server side
	I0625 15:11:34.534259   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:34.534275   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:34.534287   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:34.534299   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:34.534551   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:34.534577   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:34.841631   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:34.843389   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:35.008698   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:35.103179   22036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.479866749s)
	I0625 15:11:35.103231   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:35.103248   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:35.103553   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:35.103574   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:35.103582   22036 main.go:141] libmachine: Making call to close driver server
	I0625 15:11:35.103589   22036 main.go:141] libmachine: (addons-739670) Calling .Close
	I0625 15:11:35.103787   22036 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:11:35.103801   22036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:11:35.105714   22036 addons.go:475] Verifying addon gcp-auth=true in "addons-739670"
	I0625 15:11:35.108509   22036 out.go:177] * Verifying gcp-auth addon...
	I0625 15:11:35.110353   22036 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0625 15:11:35.120381   22036 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0625 15:11:35.120397   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:35.344926   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:35.345420   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:35.477218   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:35.564537   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:11:35.614953   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:35.838374   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:35.838827   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:35.975478   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:36.117249   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:36.337515   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:36.338352   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:36.475532   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:36.615923   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:36.840440   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:36.857667   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:36.975567   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:37.115188   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:37.338998   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:37.339410   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:37.475631   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:37.614937   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:37.837388   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:37.837508   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:37.975560   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:38.064682   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:11:38.114189   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:38.337521   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:38.342743   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:38.757832   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:38.758349   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:38.838740   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:38.839636   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:38.975062   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:39.115674   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:39.338677   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:39.338883   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:39.475527   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:39.614299   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:39.840492   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:39.841583   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:39.975620   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:40.113780   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:40.340063   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:40.342698   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:40.475396   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:40.563952   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:11:40.615197   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:41.280300   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:41.281609   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:41.286058   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:41.294013   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:41.338931   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:41.339069   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:41.476120   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:41.615223   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:41.838601   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:41.838940   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:41.975929   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:42.116392   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:42.338091   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:42.338994   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:42.478028   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:42.564065   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:11:42.614249   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:42.838776   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:42.841225   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:42.982405   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:43.118485   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:43.340571   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:43.341640   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:43.478071   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:43.616408   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:43.839097   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:43.839103   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:43.976353   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:44.114208   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:44.337830   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:44.344159   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:44.475712   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:44.567399   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:11:44.615233   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:44.838417   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:44.838700   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:44.975047   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:45.115782   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:45.339843   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:45.340017   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:45.475767   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:45.615559   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:45.838381   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:45.838581   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:45.975512   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:46.114405   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:46.338195   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:46.340136   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:46.475268   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:46.615265   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:47.129632   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:47.130000   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:47.130350   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:47.130743   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:47.132645   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:11:47.338645   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:47.338789   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:47.476014   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:47.616481   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:47.838018   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:47.838023   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:47.974583   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:48.113786   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:48.338398   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:48.338502   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:48.477002   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:48.614115   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:48.838701   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:48.840064   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:48.975464   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:49.114894   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:49.339044   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:49.339319   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:49.475793   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:49.565888   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:11:49.614746   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:49.839095   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:49.840145   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:49.976160   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:50.114721   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:50.337506   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:50.338746   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:50.475720   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:50.615220   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:50.838695   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:50.838806   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:50.975346   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:51.116353   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:51.338626   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:51.339940   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:51.475425   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:51.567159   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:11:51.614320   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:51.838024   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:51.838041   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:51.975715   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:52.116903   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:52.338875   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:52.339261   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:52.476728   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:52.613990   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:52.840816   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:52.840926   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:52.978545   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:53.113995   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:53.373286   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:53.374702   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:53.474793   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:53.613452   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:53.838318   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:53.839444   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:53.975170   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:54.064117   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:11:54.115147   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:54.338640   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:54.338776   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:54.475853   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:54.618446   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:54.838568   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:54.838674   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:54.975732   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:55.113742   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:55.338488   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:55.338666   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:55.479852   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:55.614541   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:55.839029   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:55.840628   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:55.982351   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:56.114500   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:56.337560   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:56.337994   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:56.475620   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:56.565893   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:11:56.615090   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:56.837762   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:56.839333   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:56.975076   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:57.115751   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:57.340219   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:57.341113   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:57.483738   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:57.617263   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:57.837724   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:57.838131   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:57.976137   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:58.113521   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:58.337683   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:58.339253   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:58.475383   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:58.613982   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:58.836960   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:58.839104   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:58.975813   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:59.065328   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:11:59.114368   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:59.496302   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:59.496737   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:11:59.498245   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:59.698423   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:11:59.838164   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:11:59.838898   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:11:59.974928   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:00.115402   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:00.339747   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:00.339894   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:00.476101   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:00.613625   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:00.839676   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:00.840273   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:00.975642   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:01.065984   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:12:01.113943   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:01.337382   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:01.337679   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:01.476376   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:01.613994   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:01.837053   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:01.838440   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:01.978384   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:02.113918   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:02.341550   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:02.343413   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:02.475169   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:02.616114   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:02.842191   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:02.843638   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:02.974783   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:03.114158   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:03.340360   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:03.340957   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:03.478234   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:03.564671   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:12:03.614493   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:03.837519   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:03.838365   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:03.982313   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:04.114349   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:04.338215   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:04.339014   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:04.474461   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:04.655494   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:04.862359   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:04.871506   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:04.975627   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:05.115024   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:05.338879   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:05.340482   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:05.475803   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:05.615243   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:05.851275   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:05.853577   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:05.981129   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:06.066487   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:12:06.116173   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:06.338193   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:06.338795   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:06.474529   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:06.614362   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:06.840811   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:06.841559   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:06.975207   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:07.113952   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:07.337279   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:07.337564   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:07.475307   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:07.620364   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:07.838066   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:07.838334   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:07.975608   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:08.114464   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:08.338408   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:08.339203   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:08.475524   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:08.563441   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:12:08.613966   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:08.839779   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:08.839946   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:08.975775   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:09.126881   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:09.337993   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:09.338243   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:09.474888   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:09.614370   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:09.839575   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:09.840459   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:09.979968   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:10.113073   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:10.338894   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:10.339005   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:10.475514   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:10.564423   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:12:10.614453   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:10.838513   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:10.840387   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:10.974353   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:11.114103   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:11.339294   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:11.339612   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:11.475255   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:11.613319   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:11.840468   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:11.840753   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:12.032196   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:12.114659   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:12.339804   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:12.344878   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:12.475376   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:12.614230   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:12.837922   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:12.837985   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:12.975899   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:13.063785   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:12:13.114096   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:13.336946   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:13.338643   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:13.475113   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:13.613999   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:13.837636   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:13.841471   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:13.977140   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:14.119541   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:14.339735   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:14.341547   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:14.475853   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:14.613550   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:14.839668   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:14.839805   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:14.975689   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:15.064025   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:12:15.115234   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:15.339284   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:15.342341   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:15.475415   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:15.613749   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:15.838902   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:15.839413   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:15.975453   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:16.114490   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:16.337724   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:16.338379   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:16.480733   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:16.613851   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:16.837504   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:16.838373   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:16.975225   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:17.064994   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:12:17.114003   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:17.337928   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:17.338829   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:17.476566   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:17.615292   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:17.837876   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:17.838359   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:17.975527   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:18.116889   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:18.339446   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:18.340100   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:18.474956   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:18.615611   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:18.839388   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:18.839742   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:18.975252   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:19.065391   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:12:19.116343   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:19.338886   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0625 15:12:19.340060   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:19.475899   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:19.615133   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:19.838127   22036 kapi.go:107] duration metric: took 47.504997383s to wait for kubernetes.io/minikube-addons=registry ...
	I0625 15:12:19.840113   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:19.975460   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:20.114037   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:20.338425   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:20.474666   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:20.613950   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:20.837774   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:20.975280   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:21.070443   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:12:21.114341   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:21.338062   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:21.475428   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:21.614361   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:21.837574   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:21.976256   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:22.115096   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:22.336996   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:22.476954   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:22.614005   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:22.838418   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:22.974827   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:23.113445   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:23.337444   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:23.476433   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:23.564692   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:12:23.614252   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:23.836835   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:23.975634   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:24.114267   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:24.336994   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:24.475981   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:24.615471   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:24.838139   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:24.975442   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:25.114230   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:25.337225   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:25.476017   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:25.575208   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:12:25.614227   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:25.837342   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:25.976740   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:26.114622   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:26.337522   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:26.475238   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:26.613596   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:26.837854   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:26.975155   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:27.114369   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:27.338357   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:27.476888   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:27.614199   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:27.837176   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:27.975544   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:28.066201   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:12:28.113977   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:28.338368   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:28.491938   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:28.619710   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:28.837503   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:28.975464   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:29.114157   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:29.587760   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:29.588556   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:29.614729   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:29.838025   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:29.975761   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:30.117921   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:30.338057   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:30.476736   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:30.566098   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:12:30.613886   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:30.838023   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:30.975172   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:31.113594   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:31.338496   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:31.475471   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:31.614961   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:31.836803   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:31.974317   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:32.115009   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:32.338129   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:32.478813   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:32.614041   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:32.837004   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:32.975604   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:33.064542   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:12:33.114909   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:33.337248   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:33.475955   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:33.613796   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:33.837757   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:33.975066   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:34.114287   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:34.337775   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:34.478562   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:34.614093   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:34.837048   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:34.975380   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:35.114203   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:35.337771   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:35.475737   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:35.563293   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:12:35.616092   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:35.837275   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:35.975260   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:36.114755   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:36.337597   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:36.475842   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:36.613467   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:36.837691   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:36.976324   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:37.114549   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:37.338124   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:37.476160   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:37.614427   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:37.838273   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:37.975605   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:38.064399   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:12:38.115252   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:38.338849   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:38.475817   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:38.617060   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:38.884482   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:38.976051   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:39.114454   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:39.337571   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:39.476233   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:39.613704   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:39.837667   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:39.982394   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:40.069657   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:12:40.113888   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:40.337506   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:40.476173   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:40.615440   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:40.837504   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:40.975023   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:41.115458   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:41.340706   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:41.480763   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:41.613968   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:41.837976   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:41.974980   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:42.114139   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:42.337264   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:42.475782   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:42.564604   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:12:42.614297   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:42.837225   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:42.974341   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:43.113552   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:43.337768   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:43.478534   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:43.614448   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:43.838179   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:43.975614   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:44.118240   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:44.337846   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:44.474548   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:44.613997   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:44.837717   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:44.975248   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:45.065535   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:12:45.114712   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:45.337678   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:45.727381   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:45.729613   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:45.931280   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:45.974997   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:46.114741   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:46.337424   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:46.475950   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:46.614171   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:46.836981   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:46.976291   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:47.115992   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:47.338689   22036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0625 15:12:47.483304   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:47.564058   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:12:47.614596   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:47.838224   22036 kapi.go:107] duration metric: took 1m15.50536631s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0625 15:12:47.976710   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:48.114766   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:48.474883   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:48.614814   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:48.975027   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:49.123770   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:49.474615   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:49.564488   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:12:49.614475   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:49.975830   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:50.114603   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:50.475361   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:50.614119   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0625 15:12:50.975507   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:51.116093   22036 kapi.go:107] duration metric: took 1m16.0057383s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0625 15:12:51.118110   22036 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-739670 cluster.
	I0625 15:12:51.119573   22036 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0625 15:12:51.120966   22036 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0625 15:12:51.475613   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:51.565201   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:12:51.975621   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:52.485426   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:52.975397   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:53.475528   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:53.975290   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:54.064305   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:12:54.478161   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:54.976181   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:55.475327   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:55.975639   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:56.476123   22036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0625 15:12:56.564464   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:12:56.974832   22036 kapi.go:107] duration metric: took 1m23.50515879s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0625 15:12:56.976338   22036 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, helm-tiller, ingress-dns, nvidia-device-plugin, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0625 15:12:56.977634   22036 addons.go:510] duration metric: took 1m33.549929081s for enable addons: enabled=[storage-provisioner cloud-spanner helm-tiller ingress-dns nvidia-device-plugin inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0625 15:12:58.564585   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:13:00.567232   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:13:03.064604   22036 pod_ready.go:102] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"False"
	I0625 15:13:04.064531   22036 pod_ready.go:92] pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace has status "Ready":"True"
	I0625 15:13:04.064556   22036 pod_ready.go:81] duration metric: took 1m30.50660967s for pod "metrics-server-c59844bb4-h5242" in "kube-system" namespace to be "Ready" ...
	I0625 15:13:04.064569   22036 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-z5vv5" in "kube-system" namespace to be "Ready" ...
	I0625 15:13:04.073680   22036 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-z5vv5" in "kube-system" namespace has status "Ready":"True"
	I0625 15:13:04.073707   22036 pod_ready.go:81] duration metric: took 9.128568ms for pod "nvidia-device-plugin-daemonset-z5vv5" in "kube-system" namespace to be "Ready" ...
	I0625 15:13:04.073732   22036 pod_ready.go:38] duration metric: took 1m31.716199216s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0625 15:13:04.073754   22036 api_server.go:52] waiting for apiserver process to appear ...
	I0625 15:13:04.073791   22036 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0625 15:13:04.073855   22036 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0625 15:13:04.122227   22036 cri.go:89] found id: "6eac4596c6385aa7ddbb6ae3c9b8f3f3d8ba869a98304b46e9da40d81ecf5451"
	I0625 15:13:04.122249   22036 cri.go:89] found id: ""
	I0625 15:13:04.122259   22036 logs.go:276] 1 containers: [6eac4596c6385aa7ddbb6ae3c9b8f3f3d8ba869a98304b46e9da40d81ecf5451]
	I0625 15:13:04.122310   22036 ssh_runner.go:195] Run: which crictl
	I0625 15:13:04.127092   22036 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0625 15:13:04.127162   22036 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0625 15:13:04.166353   22036 cri.go:89] found id: "82fb38e37083c02fb84598409849737676746162308eb990930a9774c65406c2"
	I0625 15:13:04.166377   22036 cri.go:89] found id: ""
	I0625 15:13:04.166385   22036 logs.go:276] 1 containers: [82fb38e37083c02fb84598409849737676746162308eb990930a9774c65406c2]
	I0625 15:13:04.166433   22036 ssh_runner.go:195] Run: which crictl
	I0625 15:13:04.171467   22036 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0625 15:13:04.171532   22036 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0625 15:13:04.211834   22036 cri.go:89] found id: "fed088c24fe61a44322ee3c5960565457b767a3bf79b1391f56be7500bc6c932"
	I0625 15:13:04.211856   22036 cri.go:89] found id: ""
	I0625 15:13:04.211865   22036 logs.go:276] 1 containers: [fed088c24fe61a44322ee3c5960565457b767a3bf79b1391f56be7500bc6c932]
	I0625 15:13:04.211924   22036 ssh_runner.go:195] Run: which crictl
	I0625 15:13:04.216322   22036 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0625 15:13:04.216376   22036 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0625 15:13:04.260234   22036 cri.go:89] found id: "1606383c30eff93fd4aeae6f8d7357f8f087edf9c0fee53060b5a8a4ce8d6cb9"
	I0625 15:13:04.260272   22036 cri.go:89] found id: ""
	I0625 15:13:04.260282   22036 logs.go:276] 1 containers: [1606383c30eff93fd4aeae6f8d7357f8f087edf9c0fee53060b5a8a4ce8d6cb9]
	I0625 15:13:04.260338   22036 ssh_runner.go:195] Run: which crictl
	I0625 15:13:04.265631   22036 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0625 15:13:04.265705   22036 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0625 15:13:04.313610   22036 cri.go:89] found id: "ded6970fc6788deeed0a1acc35b0de4645b3441748f4ff9237b15fd846638ec4"
	I0625 15:13:04.313633   22036 cri.go:89] found id: ""
	I0625 15:13:04.313643   22036 logs.go:276] 1 containers: [ded6970fc6788deeed0a1acc35b0de4645b3441748f4ff9237b15fd846638ec4]
	I0625 15:13:04.313689   22036 ssh_runner.go:195] Run: which crictl
	I0625 15:13:04.318008   22036 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0625 15:13:04.318058   22036 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0625 15:13:04.358518   22036 cri.go:89] found id: "4e062b5c6442a0590e72fc2a31f311d2f5a020fc3ef3527a120cab06bbcf12a3"
	I0625 15:13:04.358545   22036 cri.go:89] found id: ""
	I0625 15:13:04.358554   22036 logs.go:276] 1 containers: [4e062b5c6442a0590e72fc2a31f311d2f5a020fc3ef3527a120cab06bbcf12a3]
	I0625 15:13:04.358612   22036 ssh_runner.go:195] Run: which crictl
	I0625 15:13:04.362817   22036 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0625 15:13:04.362873   22036 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0625 15:13:04.409213   22036 cri.go:89] found id: ""
	I0625 15:13:04.409240   22036 logs.go:276] 0 containers: []
	W0625 15:13:04.409248   22036 logs.go:278] No container was found matching "kindnet"
	I0625 15:13:04.409257   22036 logs.go:123] Gathering logs for coredns [fed088c24fe61a44322ee3c5960565457b767a3bf79b1391f56be7500bc6c932] ...
	I0625 15:13:04.409268   22036 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed088c24fe61a44322ee3c5960565457b767a3bf79b1391f56be7500bc6c932"
	I0625 15:13:04.446941   22036 logs.go:123] Gathering logs for kube-proxy [ded6970fc6788deeed0a1acc35b0de4645b3441748f4ff9237b15fd846638ec4] ...
	I0625 15:13:04.446968   22036 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ded6970fc6788deeed0a1acc35b0de4645b3441748f4ff9237b15fd846638ec4"
	I0625 15:13:04.488649   22036 logs.go:123] Gathering logs for kube-controller-manager [4e062b5c6442a0590e72fc2a31f311d2f5a020fc3ef3527a120cab06bbcf12a3] ...
	I0625 15:13:04.488675   22036 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e062b5c6442a0590e72fc2a31f311d2f5a020fc3ef3527a120cab06bbcf12a3"
	I0625 15:13:04.556283   22036 logs.go:123] Gathering logs for CRI-O ...
	I0625 15:13:04.556313   22036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0625 15:13:05.435811   22036 logs.go:123] Gathering logs for container status ...
	I0625 15:13:05.435853   22036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0625 15:13:05.484936   22036 logs.go:123] Gathering logs for kube-apiserver [6eac4596c6385aa7ddbb6ae3c9b8f3f3d8ba869a98304b46e9da40d81ecf5451] ...
	I0625 15:13:05.484967   22036 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6eac4596c6385aa7ddbb6ae3c9b8f3f3d8ba869a98304b46e9da40d81ecf5451"
	I0625 15:13:05.532712   22036 logs.go:123] Gathering logs for dmesg ...
	I0625 15:13:05.532739   22036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0625 15:13:05.550054   22036 logs.go:123] Gathering logs for describe nodes ...
	I0625 15:13:05.550076   22036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0625 15:13:05.683030   22036 logs.go:123] Gathering logs for etcd [82fb38e37083c02fb84598409849737676746162308eb990930a9774c65406c2] ...
	I0625 15:13:05.683060   22036 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82fb38e37083c02fb84598409849737676746162308eb990930a9774c65406c2"
	I0625 15:13:05.739333   22036 logs.go:123] Gathering logs for kube-scheduler [1606383c30eff93fd4aeae6f8d7357f8f087edf9c0fee53060b5a8a4ce8d6cb9] ...
	I0625 15:13:05.739366   22036 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1606383c30eff93fd4aeae6f8d7357f8f087edf9c0fee53060b5a8a4ce8d6cb9"
	I0625 15:13:05.785782   22036 logs.go:123] Gathering logs for kubelet ...
	I0625 15:13:05.785810   22036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0625 15:13:05.838406   22036 logs.go:138] Found kubelet problem: Jun 25 15:11:27 addons-739670 kubelet[1267]: W0625 15:11:27.554632    1267 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-739670" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-739670' and this object
	W0625 15:13:05.838579   22036 logs.go:138] Found kubelet problem: Jun 25 15:11:27 addons-739670 kubelet[1267]: E0625 15:11:27.554667    1267 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-739670" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-739670' and this object
	W0625 15:13:05.838703   22036 logs.go:138] Found kubelet problem: Jun 25 15:11:27 addons-739670 kubelet[1267]: W0625 15:11:27.554704    1267 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-739670" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-739670' and this object
	W0625 15:13:05.838838   22036 logs.go:138] Found kubelet problem: Jun 25 15:11:27 addons-739670 kubelet[1267]: E0625 15:11:27.554713    1267 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-739670" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-739670' and this object
	I0625 15:13:05.871242   22036 out.go:304] Setting ErrFile to fd 2...
	I0625 15:13:05.871263   22036 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0625 15:13:05.871311   22036 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0625 15:13:05.871327   22036 out.go:239]   Jun 25 15:11:27 addons-739670 kubelet[1267]: W0625 15:11:27.554632    1267 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-739670" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-739670' and this object
	  Jun 25 15:11:27 addons-739670 kubelet[1267]: W0625 15:11:27.554632    1267 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-739670" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-739670' and this object
	W0625 15:13:05.871336   22036 out.go:239]   Jun 25 15:11:27 addons-739670 kubelet[1267]: E0625 15:11:27.554667    1267 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-739670" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-739670' and this object
	  Jun 25 15:11:27 addons-739670 kubelet[1267]: E0625 15:11:27.554667    1267 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-739670" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-739670' and this object
	W0625 15:13:05.871347   22036 out.go:239]   Jun 25 15:11:27 addons-739670 kubelet[1267]: W0625 15:11:27.554704    1267 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-739670" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-739670' and this object
	  Jun 25 15:11:27 addons-739670 kubelet[1267]: W0625 15:11:27.554704    1267 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-739670" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-739670' and this object
	W0625 15:13:05.871358   22036 out.go:239]   Jun 25 15:11:27 addons-739670 kubelet[1267]: E0625 15:11:27.554713    1267 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-739670" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-739670' and this object
	  Jun 25 15:11:27 addons-739670 kubelet[1267]: E0625 15:11:27.554713    1267 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-739670" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-739670' and this object
	I0625 15:13:05.871365   22036 out.go:304] Setting ErrFile to fd 2...
	I0625 15:13:05.871375   22036 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 15:13:15.872116   22036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 15:13:15.893125   22036 api_server.go:72] duration metric: took 1m52.465433388s to wait for apiserver process to appear ...
	I0625 15:13:15.893153   22036 api_server.go:88] waiting for apiserver healthz status ...
	I0625 15:13:15.893193   22036 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0625 15:13:15.893247   22036 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0625 15:13:15.949132   22036 cri.go:89] found id: "6eac4596c6385aa7ddbb6ae3c9b8f3f3d8ba869a98304b46e9da40d81ecf5451"
	I0625 15:13:15.949155   22036 cri.go:89] found id: ""
	I0625 15:13:15.949163   22036 logs.go:276] 1 containers: [6eac4596c6385aa7ddbb6ae3c9b8f3f3d8ba869a98304b46e9da40d81ecf5451]
	I0625 15:13:15.949217   22036 ssh_runner.go:195] Run: which crictl
	I0625 15:13:15.954649   22036 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0625 15:13:15.954712   22036 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0625 15:13:16.012903   22036 cri.go:89] found id: "82fb38e37083c02fb84598409849737676746162308eb990930a9774c65406c2"
	I0625 15:13:16.012924   22036 cri.go:89] found id: ""
	I0625 15:13:16.012932   22036 logs.go:276] 1 containers: [82fb38e37083c02fb84598409849737676746162308eb990930a9774c65406c2]
	I0625 15:13:16.012975   22036 ssh_runner.go:195] Run: which crictl
	I0625 15:13:16.017827   22036 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0625 15:13:16.017889   22036 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0625 15:13:16.061583   22036 cri.go:89] found id: "fed088c24fe61a44322ee3c5960565457b767a3bf79b1391f56be7500bc6c932"
	I0625 15:13:16.061602   22036 cri.go:89] found id: ""
	I0625 15:13:16.061610   22036 logs.go:276] 1 containers: [fed088c24fe61a44322ee3c5960565457b767a3bf79b1391f56be7500bc6c932]
	I0625 15:13:16.061660   22036 ssh_runner.go:195] Run: which crictl
	I0625 15:13:16.066413   22036 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0625 15:13:16.066483   22036 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0625 15:13:16.110254   22036 cri.go:89] found id: "1606383c30eff93fd4aeae6f8d7357f8f087edf9c0fee53060b5a8a4ce8d6cb9"
	I0625 15:13:16.110275   22036 cri.go:89] found id: ""
	I0625 15:13:16.110287   22036 logs.go:276] 1 containers: [1606383c30eff93fd4aeae6f8d7357f8f087edf9c0fee53060b5a8a4ce8d6cb9]
	I0625 15:13:16.110332   22036 ssh_runner.go:195] Run: which crictl
	I0625 15:13:16.114997   22036 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0625 15:13:16.115053   22036 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0625 15:13:16.154183   22036 cri.go:89] found id: "ded6970fc6788deeed0a1acc35b0de4645b3441748f4ff9237b15fd846638ec4"
	I0625 15:13:16.154203   22036 cri.go:89] found id: ""
	I0625 15:13:16.154211   22036 logs.go:276] 1 containers: [ded6970fc6788deeed0a1acc35b0de4645b3441748f4ff9237b15fd846638ec4]
	I0625 15:13:16.154254   22036 ssh_runner.go:195] Run: which crictl
	I0625 15:13:16.158719   22036 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0625 15:13:16.158787   22036 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0625 15:13:16.199159   22036 cri.go:89] found id: "4e062b5c6442a0590e72fc2a31f311d2f5a020fc3ef3527a120cab06bbcf12a3"
	I0625 15:13:16.199178   22036 cri.go:89] found id: ""
	I0625 15:13:16.199187   22036 logs.go:276] 1 containers: [4e062b5c6442a0590e72fc2a31f311d2f5a020fc3ef3527a120cab06bbcf12a3]
	I0625 15:13:16.199243   22036 ssh_runner.go:195] Run: which crictl
	I0625 15:13:16.203504   22036 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0625 15:13:16.203554   22036 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0625 15:13:16.250646   22036 cri.go:89] found id: ""
	I0625 15:13:16.250679   22036 logs.go:276] 0 containers: []
	W0625 15:13:16.250690   22036 logs.go:278] No container was found matching "kindnet"
	I0625 15:13:16.250703   22036 logs.go:123] Gathering logs for kubelet ...
	I0625 15:13:16.250718   22036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0625 15:13:16.301995   22036 logs.go:138] Found kubelet problem: Jun 25 15:11:27 addons-739670 kubelet[1267]: W0625 15:11:27.554632    1267 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-739670" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-739670' and this object
	W0625 15:13:16.302179   22036 logs.go:138] Found kubelet problem: Jun 25 15:11:27 addons-739670 kubelet[1267]: E0625 15:11:27.554667    1267 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-739670" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-739670' and this object
	W0625 15:13:16.302356   22036 logs.go:138] Found kubelet problem: Jun 25 15:11:27 addons-739670 kubelet[1267]: W0625 15:11:27.554704    1267 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-739670" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-739670' and this object
	W0625 15:13:16.302561   22036 logs.go:138] Found kubelet problem: Jun 25 15:11:27 addons-739670 kubelet[1267]: E0625 15:11:27.554713    1267 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-739670" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-739670' and this object
	I0625 15:13:16.335749   22036 logs.go:123] Gathering logs for dmesg ...
	I0625 15:13:16.335773   22036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0625 15:13:16.351109   22036 logs.go:123] Gathering logs for describe nodes ...
	I0625 15:13:16.351137   22036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0625 15:13:16.487061   22036 logs.go:123] Gathering logs for kube-proxy [ded6970fc6788deeed0a1acc35b0de4645b3441748f4ff9237b15fd846638ec4] ...
	I0625 15:13:16.487100   22036 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ded6970fc6788deeed0a1acc35b0de4645b3441748f4ff9237b15fd846638ec4"
	I0625 15:13:16.565411   22036 logs.go:123] Gathering logs for container status ...
	I0625 15:13:16.565434   22036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0625 15:13:16.613549   22036 logs.go:123] Gathering logs for CRI-O ...
	I0625 15:13:16.613576   22036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"

                                                
                                                
** /stderr **
addons_test.go:112: out/minikube-linux-amd64 start -p addons-739670 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: signal: killed
--- FAIL: TestAddons/Setup (2400.06s)
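For context on the gcp-auth messages in the log above ("add a label with the `gcp-auth-skip-secret` key to your pod configuration"): below is a minimal, illustrative sketch of a pod object carrying that label, written with the Kubernetes Go API types. The label key comes from the log output; the pod name, container name, and image are hypothetical placeholders, not taken from the test.

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		// A pod labeled so the minikube gcp-auth webhook skips mounting
		// GCP credentials into it. The label key is the one named in the
		// log above; everything else here is a hypothetical example.
		pod := corev1.Pod{
			TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
			ObjectMeta: metav1.ObjectMeta{
				Name:   "example-pod",
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "registry.k8s.io/pause:3.9"},
				},
			},
		}

		// Print the equivalent manifest; applying it (e.g. with kubectl apply -f)
		// would create a pod that opts out of credential mounting.
		out, err := yaml.Marshal(pod)
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}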

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (3.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 image rm gcr.io/google-containers/addon-resizer:functional-951282 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-linux-amd64 -p functional-951282 image rm gcr.io/google-containers/addon-resizer:functional-951282 --alsologtostderr: (3.042770137s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 image ls
functional_test.go:402: expected "gcr.io/google-containers/addon-resizer:functional-951282" to be removed from minikube but still exists
--- FAIL: TestFunctional/parallel/ImageCommands/ImageRemove (3.28s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 node stop m02 -v=7 --alsologtostderr
E0625 15:59:49.611671   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
E0625 16:00:10.092616   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
E0625 16:00:51.053530   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-674765 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.478935991s)

                                                
                                                
-- stdout --
	* Stopping node "ha-674765-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0625 15:59:48.596445   40137 out.go:291] Setting OutFile to fd 1 ...
	I0625 15:59:48.596799   40137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 15:59:48.596815   40137 out.go:304] Setting ErrFile to fd 2...
	I0625 15:59:48.596823   40137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 15:59:48.597083   40137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 15:59:48.597350   40137 mustload.go:65] Loading cluster: ha-674765
	I0625 15:59:48.597679   40137 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 15:59:48.597694   40137 stop.go:39] StopHost: ha-674765-m02
	I0625 15:59:48.598114   40137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:59:48.598172   40137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:59:48.614340   40137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41387
	I0625 15:59:48.614773   40137 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:59:48.615289   40137 main.go:141] libmachine: Using API Version  1
	I0625 15:59:48.615309   40137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:59:48.615676   40137 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:59:48.617587   40137 out.go:177] * Stopping node "ha-674765-m02"  ...
	I0625 15:59:48.619339   40137 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0625 15:59:48.619384   40137 main.go:141] libmachine: (ha-674765-m02) Calling .DriverName
	I0625 15:59:48.619657   40137 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0625 15:59:48.619679   40137 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 15:59:48.622998   40137 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:59:48.623569   40137 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:59:48.623599   40137 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:59:48.623772   40137 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 15:59:48.623953   40137 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:59:48.624230   40137 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 15:59:48.624422   40137 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/id_rsa Username:docker}
	I0625 15:59:48.714388   40137 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0625 15:59:48.775054   40137 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0625 15:59:48.831966   40137 main.go:141] libmachine: Stopping "ha-674765-m02"...
	I0625 15:59:48.832017   40137 main.go:141] libmachine: (ha-674765-m02) Calling .GetState
	I0625 15:59:48.833589   40137 main.go:141] libmachine: (ha-674765-m02) Calling .Stop
	I0625 15:59:48.836869   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 0/120
	I0625 15:59:49.838175   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 1/120
	I0625 15:59:50.839829   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 2/120
	I0625 15:59:51.841501   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 3/120
	I0625 15:59:52.842859   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 4/120
	I0625 15:59:53.844619   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 5/120
	I0625 15:59:54.845748   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 6/120
	I0625 15:59:55.847061   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 7/120
	I0625 15:59:56.849247   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 8/120
	I0625 15:59:57.850563   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 9/120
	I0625 15:59:58.852653   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 10/120
	I0625 15:59:59.854033   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 11/120
	I0625 16:00:00.855216   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 12/120
	I0625 16:00:01.856886   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 13/120
	I0625 16:00:02.858525   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 14/120
	I0625 16:00:03.860467   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 15/120
	I0625 16:00:04.862734   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 16/120
	I0625 16:00:05.864880   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 17/120
	I0625 16:00:06.866269   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 18/120
	I0625 16:00:07.867806   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 19/120
	I0625 16:00:08.869653   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 20/120
	I0625 16:00:09.871143   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 21/120
	I0625 16:00:10.873151   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 22/120
	I0625 16:00:11.874448   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 23/120
	I0625 16:00:12.875796   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 24/120
	I0625 16:00:13.877559   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 25/120
	I0625 16:00:14.878780   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 26/120
	I0625 16:00:15.880815   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 27/120
	I0625 16:00:16.882982   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 28/120
	I0625 16:00:17.884888   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 29/120
	I0625 16:00:18.886828   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 30/120
	I0625 16:00:19.889043   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 31/120
	I0625 16:00:20.890405   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 32/120
	I0625 16:00:21.892071   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 33/120
	I0625 16:00:22.893472   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 34/120
	I0625 16:00:23.895457   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 35/120
	I0625 16:00:24.897576   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 36/120
	I0625 16:00:25.898928   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 37/120
	I0625 16:00:26.900806   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 38/120
	I0625 16:00:27.902044   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 39/120
	I0625 16:00:28.904349   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 40/120
	I0625 16:00:29.906563   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 41/120
	I0625 16:00:30.908000   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 42/120
	I0625 16:00:31.909333   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 43/120
	I0625 16:00:32.910635   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 44/120
	I0625 16:00:33.912521   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 45/120
	I0625 16:00:34.913888   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 46/120
	I0625 16:00:35.915115   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 47/120
	I0625 16:00:36.916351   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 48/120
	I0625 16:00:37.917589   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 49/120
	I0625 16:00:38.919593   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 50/120
	I0625 16:00:39.920902   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 51/120
	I0625 16:00:40.922216   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 52/120
	I0625 16:00:41.923741   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 53/120
	I0625 16:00:42.924949   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 54/120
	I0625 16:00:43.926696   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 55/120
	I0625 16:00:44.928012   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 56/120
	I0625 16:00:45.929253   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 57/120
	I0625 16:00:46.930551   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 58/120
	I0625 16:00:47.931725   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 59/120
	I0625 16:00:48.933621   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 60/120
	I0625 16:00:49.935781   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 61/120
	I0625 16:00:50.937108   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 62/120
	I0625 16:00:51.938283   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 63/120
	I0625 16:00:52.939506   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 64/120
	I0625 16:00:53.941430   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 65/120
	I0625 16:00:54.943495   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 66/120
	I0625 16:00:55.944736   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 67/120
	I0625 16:00:56.946763   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 68/120
	I0625 16:00:57.949005   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 69/120
	I0625 16:00:58.951200   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 70/120
	I0625 16:00:59.952408   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 71/120
	I0625 16:01:00.954557   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 72/120
	I0625 16:01:01.955720   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 73/120
	I0625 16:01:02.956891   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 74/120
	I0625 16:01:03.958510   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 75/120
	I0625 16:01:04.959908   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 76/120
	I0625 16:01:05.961036   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 77/120
	I0625 16:01:06.962685   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 78/120
	I0625 16:01:07.964992   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 79/120
	I0625 16:01:08.967010   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 80/120
	I0625 16:01:09.968983   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 81/120
	I0625 16:01:10.970496   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 82/120
	I0625 16:01:11.972551   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 83/120
	I0625 16:01:12.973906   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 84/120
	I0625 16:01:13.975798   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 85/120
	I0625 16:01:14.977095   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 86/120
	I0625 16:01:15.978560   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 87/120
	I0625 16:01:16.979902   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 88/120
	I0625 16:01:17.981104   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 89/120
	I0625 16:01:18.983291   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 90/120
	I0625 16:01:19.984577   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 91/120
	I0625 16:01:20.986228   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 92/120
	I0625 16:01:21.988080   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 93/120
	I0625 16:01:22.989230   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 94/120
	I0625 16:01:23.991110   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 95/120
	I0625 16:01:24.992463   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 96/120
	I0625 16:01:25.994329   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 97/120
	I0625 16:01:26.995655   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 98/120
	I0625 16:01:27.997677   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 99/120
	I0625 16:01:28.999760   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 100/120
	I0625 16:01:30.001016   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 101/120
	I0625 16:01:31.002440   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 102/120
	I0625 16:01:32.003717   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 103/120
	I0625 16:01:33.005041   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 104/120
	I0625 16:01:34.007099   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 105/120
	I0625 16:01:35.008429   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 106/120
	I0625 16:01:36.009798   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 107/120
	I0625 16:01:37.011053   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 108/120
	I0625 16:01:38.013014   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 109/120
	I0625 16:01:39.014552   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 110/120
	I0625 16:01:40.015690   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 111/120
	I0625 16:01:41.017899   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 112/120
	I0625 16:01:42.019592   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 113/120
	I0625 16:01:43.020864   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 114/120
	I0625 16:01:44.022730   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 115/120
	I0625 16:01:45.024893   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 116/120
	I0625 16:01:46.027245   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 117/120
	I0625 16:01:47.028737   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 118/120
	I0625 16:01:48.030107   40137 main.go:141] libmachine: (ha-674765-m02) Waiting for machine to stop 119/120
	I0625 16:01:49.031293   40137 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0625 16:01:49.031432   40137 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-674765 node stop m02 -v=7 --alsologtostderr": exit status 30
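The stderr above shows the stop path polling the VM state once per second for up to 120 attempts before giving up with the "unable to stop vm" error. Below is a minimal Go sketch of that poll-with-deadline pattern; the names (vmStater, waitForStop, fakeVM) and the shortened 5-attempt budget are illustrative assumptions, not minikube's actual stop implementation.

	// Illustrative sketch only: vmStater, waitForStop, fakeVM and the shortened
	// attempt budget are assumptions, not minikube's actual implementation.
	package main

	import (
		"fmt"
		"time"
	)

	// vmStater stands in for the parts of a libmachine driver used here.
	type vmStater interface {
		Stop() error
		State() (string, error)
	}

	// waitForStop requests a stop, then polls once per second for up to
	// maxAttempts, mirroring the "Waiting for machine to stop N/120" lines above.
	func waitForStop(vm vmStater, maxAttempts int) error {
		if err := vm.Stop(); err != nil {
			return err
		}
		state := "Unknown"
		for i := 0; i < maxAttempts; i++ {
			var err error
			if state, err = vm.State(); err != nil {
				return err
			}
			if state == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
			time.Sleep(time.Second)
		}
		return fmt.Errorf("unable to stop vm, current state %q", state)
	}

	// fakeVM never leaves "Running", reproducing the failure mode seen in this test.
	type fakeVM struct{}

	func (fakeVM) Stop() error            { return nil }
	func (fakeVM) State() (string, error) { return "Running", nil }

	func main() {
		if err := waitForStop(fakeVM{}, 5); err != nil { // 5 attempts instead of 120 to keep the demo short
			fmt.Println("X Failed to stop node: Temporary Error: stop:", err)
		}
	}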
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-674765 status -v=7 --alsologtostderr: exit status 3 (19.095133906s)

                                                
                                                
-- stdout --
	ha-674765
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-674765-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-674765-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-674765-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0625 16:01:49.074778   40575 out.go:291] Setting OutFile to fd 1 ...
	I0625 16:01:49.075024   40575 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:01:49.075034   40575 out.go:304] Setting ErrFile to fd 2...
	I0625 16:01:49.075047   40575 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:01:49.075253   40575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 16:01:49.075454   40575 out.go:298] Setting JSON to false
	I0625 16:01:49.075476   40575 mustload.go:65] Loading cluster: ha-674765
	I0625 16:01:49.075582   40575 notify.go:220] Checking for updates...
	I0625 16:01:49.075905   40575 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:01:49.075920   40575 status.go:255] checking status of ha-674765 ...
	I0625 16:01:49.076330   40575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:01:49.076400   40575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:01:49.091081   40575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39531
	I0625 16:01:49.091537   40575 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:01:49.092022   40575 main.go:141] libmachine: Using API Version  1
	I0625 16:01:49.092047   40575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:01:49.092305   40575 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:01:49.092486   40575 main.go:141] libmachine: (ha-674765) Calling .GetState
	I0625 16:01:49.093932   40575 status.go:330] ha-674765 host status = "Running" (err=<nil>)
	I0625 16:01:49.093957   40575 host.go:66] Checking if "ha-674765" exists ...
	I0625 16:01:49.094274   40575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:01:49.094332   40575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:01:49.108161   40575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39163
	I0625 16:01:49.108474   40575 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:01:49.108883   40575 main.go:141] libmachine: Using API Version  1
	I0625 16:01:49.108907   40575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:01:49.109201   40575 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:01:49.109385   40575 main.go:141] libmachine: (ha-674765) Calling .GetIP
	I0625 16:01:49.112184   40575 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:01:49.112636   40575 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:01:49.112658   40575 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:01:49.112819   40575 host.go:66] Checking if "ha-674765" exists ...
	I0625 16:01:49.113177   40575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:01:49.113216   40575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:01:49.127510   40575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38563
	I0625 16:01:49.127950   40575 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:01:49.128353   40575 main.go:141] libmachine: Using API Version  1
	I0625 16:01:49.128372   40575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:01:49.128692   40575 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:01:49.128851   40575 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 16:01:49.129032   40575 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:01:49.129057   40575 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:01:49.131411   40575 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:01:49.131786   40575 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:01:49.131812   40575 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:01:49.131949   40575 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:01:49.132114   40575 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:01:49.132270   40575 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:01:49.132410   40575 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 16:01:49.219677   40575 ssh_runner.go:195] Run: systemctl --version
	I0625 16:01:49.226501   40575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:01:49.243184   40575 kubeconfig.go:125] found "ha-674765" server: "https://192.168.39.254:8443"
	I0625 16:01:49.243218   40575 api_server.go:166] Checking apiserver status ...
	I0625 16:01:49.243257   40575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 16:01:49.260532   40575 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1176/cgroup
	W0625 16:01:49.271819   40575 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1176/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0625 16:01:49.271861   40575 ssh_runner.go:195] Run: ls
	I0625 16:01:49.276867   40575 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0625 16:01:49.281148   40575 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0625 16:01:49.281174   40575 status.go:422] ha-674765 apiserver status = Running (err=<nil>)
	I0625 16:01:49.281187   40575 status.go:257] ha-674765 status: &{Name:ha-674765 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0625 16:01:49.281210   40575 status.go:255] checking status of ha-674765-m02 ...
	I0625 16:01:49.281561   40575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:01:49.281595   40575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:01:49.295883   40575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45069
	I0625 16:01:49.296279   40575 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:01:49.296733   40575 main.go:141] libmachine: Using API Version  1
	I0625 16:01:49.296752   40575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:01:49.297009   40575 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:01:49.297203   40575 main.go:141] libmachine: (ha-674765-m02) Calling .GetState
	I0625 16:01:49.298614   40575 status.go:330] ha-674765-m02 host status = "Running" (err=<nil>)
	I0625 16:01:49.298631   40575 host.go:66] Checking if "ha-674765-m02" exists ...
	I0625 16:01:49.298924   40575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:01:49.298977   40575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:01:49.312799   40575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39641
	I0625 16:01:49.313161   40575 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:01:49.313600   40575 main.go:141] libmachine: Using API Version  1
	I0625 16:01:49.313622   40575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:01:49.313908   40575 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:01:49.314135   40575 main.go:141] libmachine: (ha-674765-m02) Calling .GetIP
	I0625 16:01:49.316473   40575 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:01:49.316882   40575 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 16:01:49.316902   40575 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:01:49.316995   40575 host.go:66] Checking if "ha-674765-m02" exists ...
	I0625 16:01:49.317279   40575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:01:49.317312   40575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:01:49.332057   40575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33375
	I0625 16:01:49.332487   40575 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:01:49.332911   40575 main.go:141] libmachine: Using API Version  1
	I0625 16:01:49.332931   40575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:01:49.333252   40575 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:01:49.333426   40575 main.go:141] libmachine: (ha-674765-m02) Calling .DriverName
	I0625 16:01:49.333599   40575 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:01:49.333616   40575 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 16:01:49.336652   40575 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:01:49.337033   40575 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 16:01:49.337056   40575 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:01:49.337159   40575 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 16:01:49.337274   40575 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 16:01:49.337388   40575 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 16:01:49.337526   40575 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/id_rsa Username:docker}
	W0625 16:02:07.762676   40575 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.53:22: connect: no route to host
	W0625 16:02:07.762750   40575 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.53:22: connect: no route to host
	E0625 16:02:07.762778   40575 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.53:22: connect: no route to host
	I0625 16:02:07.762789   40575 status.go:257] ha-674765-m02 status: &{Name:ha-674765-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0625 16:02:07.762813   40575 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.53:22: connect: no route to host
	I0625 16:02:07.762821   40575 status.go:255] checking status of ha-674765-m03 ...
	I0625 16:02:07.763229   40575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:07.763283   40575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:07.779117   40575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42141
	I0625 16:02:07.779558   40575 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:07.780000   40575 main.go:141] libmachine: Using API Version  1
	I0625 16:02:07.780019   40575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:07.780348   40575 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:07.780538   40575 main.go:141] libmachine: (ha-674765-m03) Calling .GetState
	I0625 16:02:07.782255   40575 status.go:330] ha-674765-m03 host status = "Running" (err=<nil>)
	I0625 16:02:07.782273   40575 host.go:66] Checking if "ha-674765-m03" exists ...
	I0625 16:02:07.782663   40575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:07.782707   40575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:07.797472   40575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39715
	I0625 16:02:07.797869   40575 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:07.798292   40575 main.go:141] libmachine: Using API Version  1
	I0625 16:02:07.798321   40575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:07.798579   40575 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:07.798709   40575 main.go:141] libmachine: (ha-674765-m03) Calling .GetIP
	I0625 16:02:07.801050   40575 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:07.801437   40575 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 16:02:07.801465   40575 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:07.801563   40575 host.go:66] Checking if "ha-674765-m03" exists ...
	I0625 16:02:07.801846   40575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:07.801897   40575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:07.816760   40575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45659
	I0625 16:02:07.817068   40575 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:07.817477   40575 main.go:141] libmachine: Using API Version  1
	I0625 16:02:07.817494   40575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:07.817744   40575 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:07.817921   40575 main.go:141] libmachine: (ha-674765-m03) Calling .DriverName
	I0625 16:02:07.818100   40575 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:07.818120   40575 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 16:02:07.820819   40575 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:07.821243   40575 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 16:02:07.821280   40575 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:07.821428   40575 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 16:02:07.821603   40575 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 16:02:07.821750   40575 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 16:02:07.821890   40575 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/id_rsa Username:docker}
	I0625 16:02:07.916279   40575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:02:07.934967   40575 kubeconfig.go:125] found "ha-674765" server: "https://192.168.39.254:8443"
	I0625 16:02:07.934998   40575 api_server.go:166] Checking apiserver status ...
	I0625 16:02:07.935039   40575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 16:02:07.950398   40575 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1528/cgroup
	W0625 16:02:07.960174   40575 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1528/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0625 16:02:07.960225   40575 ssh_runner.go:195] Run: ls
	I0625 16:02:07.965833   40575 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0625 16:02:07.969924   40575 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0625 16:02:07.969941   40575 status.go:422] ha-674765-m03 apiserver status = Running (err=<nil>)
	I0625 16:02:07.969949   40575 status.go:257] ha-674765-m03 status: &{Name:ha-674765-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0625 16:02:07.969961   40575 status.go:255] checking status of ha-674765-m04 ...
	I0625 16:02:07.970236   40575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:07.970267   40575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:07.985154   40575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39831
	I0625 16:02:07.985555   40575 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:07.986094   40575 main.go:141] libmachine: Using API Version  1
	I0625 16:02:07.986115   40575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:07.986381   40575 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:07.986570   40575 main.go:141] libmachine: (ha-674765-m04) Calling .GetState
	I0625 16:02:07.988153   40575 status.go:330] ha-674765-m04 host status = "Running" (err=<nil>)
	I0625 16:02:07.988165   40575 host.go:66] Checking if "ha-674765-m04" exists ...
	I0625 16:02:07.988430   40575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:07.988458   40575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:08.003153   40575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40455
	I0625 16:02:08.003488   40575 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:08.003949   40575 main.go:141] libmachine: Using API Version  1
	I0625 16:02:08.003972   40575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:08.004259   40575 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:08.004424   40575 main.go:141] libmachine: (ha-674765-m04) Calling .GetIP
	I0625 16:02:08.007021   40575 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:08.007389   40575 main.go:141] libmachine: (ha-674765-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:21:a2", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:59:02 +0000 UTC Type:0 Mac:52:54:00:7a:21:a2 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-674765-m04 Clientid:01:52:54:00:7a:21:a2}
	I0625 16:02:08.007428   40575 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined IP address 192.168.39.7 and MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:08.007536   40575 host.go:66] Checking if "ha-674765-m04" exists ...
	I0625 16:02:08.007823   40575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:08.007861   40575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:08.022157   40575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32783
	I0625 16:02:08.022455   40575 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:08.022898   40575 main.go:141] libmachine: Using API Version  1
	I0625 16:02:08.022921   40575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:08.023269   40575 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:08.023438   40575 main.go:141] libmachine: (ha-674765-m04) Calling .DriverName
	I0625 16:02:08.023612   40575 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:08.023628   40575 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHHostname
	I0625 16:02:08.025867   40575 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:08.026184   40575 main.go:141] libmachine: (ha-674765-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:21:a2", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:59:02 +0000 UTC Type:0 Mac:52:54:00:7a:21:a2 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-674765-m04 Clientid:01:52:54:00:7a:21:a2}
	I0625 16:02:08.026207   40575 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined IP address 192.168.39.7 and MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:08.026310   40575 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHPort
	I0625 16:02:08.026460   40575 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHKeyPath
	I0625 16:02:08.026613   40575 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHUsername
	I0625 16:02:08.026746   40575 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m04/id_rsa Username:docker}
	I0625 16:02:08.110964   40575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:02:08.126343   40575 status.go:257] ha-674765-m04 status: &{Name:ha-674765-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-674765 status -v=7 --alsologtostderr" : exit status 3
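In the stderr above, each node is probed over SSH (df -h /var for storage, systemctl is-active for the kubelet, pgrep plus a /healthz request for the apiserver); when the dial to ha-674765-m02 fails with "no route to host", the node is reported as Host:Error and the status command exits non-zero. The Go sketch below illustrates how such per-node results could be folded into a process exit code; nodeStatus, exitCodeFor, and the fixed code 3 are assumptions chosen to match the observed behaviour, not minikube's real exit-code scheme.

	// Illustrative sketch only: nodeStatus, exitCodeFor and the fixed exit code 3
	// are assumptions meant to mirror the behaviour above, not minikube's code.
	package main

	import (
		"fmt"
		"os"
	)

	type nodeStatus struct {
		Name      string
		Host      string // e.g. "Running", "Error"
		Kubelet   string // e.g. "Running", "Nonexistent"
		APIServer string // e.g. "Running", "Nonexistent", "Irrelevant"
	}

	// exitCodeFor returns 0 when every host probe succeeded and 3 as soon as any
	// host could not be reached, matching the "exit status 3" observed above.
	func exitCodeFor(nodes []nodeStatus) int {
		for _, n := range nodes {
			if n.Host == "Error" {
				return 3
			}
		}
		return 0
	}

	func main() {
		nodes := []nodeStatus{
			{Name: "ha-674765", Host: "Running", Kubelet: "Running", APIServer: "Running"},
			{Name: "ha-674765-m02", Host: "Error", Kubelet: "Nonexistent", APIServer: "Nonexistent"},
			{Name: "ha-674765-m03", Host: "Running", Kubelet: "Running", APIServer: "Running"},
			{Name: "ha-674765-m04", Host: "Running", Kubelet: "Running", APIServer: "Irrelevant"},
		}
		for _, n := range nodes {
			fmt.Printf("%s\n  host: %s\n  kubelet: %s\n  apiserver: %s\n", n.Name, n.Host, n.Kubelet, n.APIServer)
		}
		os.Exit(exitCodeFor(nodes))
	}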
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-674765 -n ha-674765
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-674765 logs -n 25: (1.390947401s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-674765 cp ha-674765-m03:/home/docker/cp-test.txt                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2213486447/001/cp-test_ha-674765-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-674765 cp ha-674765-m03:/home/docker/cp-test.txt                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765:/home/docker/cp-test_ha-674765-m03_ha-674765.txt                       |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n ha-674765 sudo cat                                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | /home/docker/cp-test_ha-674765-m03_ha-674765.txt                                 |           |         |         |                     |                     |
	| cp      | ha-674765 cp ha-674765-m03:/home/docker/cp-test.txt                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m02:/home/docker/cp-test_ha-674765-m03_ha-674765-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n ha-674765-m02 sudo cat                                          | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | /home/docker/cp-test_ha-674765-m03_ha-674765-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-674765 cp ha-674765-m03:/home/docker/cp-test.txt                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m04:/home/docker/cp-test_ha-674765-m03_ha-674765-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n ha-674765-m04 sudo cat                                          | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | /home/docker/cp-test_ha-674765-m03_ha-674765-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-674765 cp testdata/cp-test.txt                                                | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-674765 cp ha-674765-m04:/home/docker/cp-test.txt                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2213486447/001/cp-test_ha-674765-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-674765 cp ha-674765-m04:/home/docker/cp-test.txt                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765:/home/docker/cp-test_ha-674765-m04_ha-674765.txt                       |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n ha-674765 sudo cat                                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | /home/docker/cp-test_ha-674765-m04_ha-674765.txt                                 |           |         |         |                     |                     |
	| cp      | ha-674765 cp ha-674765-m04:/home/docker/cp-test.txt                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m02:/home/docker/cp-test_ha-674765-m04_ha-674765-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n ha-674765-m02 sudo cat                                          | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | /home/docker/cp-test_ha-674765-m04_ha-674765-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-674765 cp ha-674765-m04:/home/docker/cp-test.txt                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m03:/home/docker/cp-test_ha-674765-m04_ha-674765-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n ha-674765-m03 sudo cat                                          | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | /home/docker/cp-test_ha-674765-m04_ha-674765-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-674765 node stop m02 -v=7                                                     | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/25 15:55:24
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0625 15:55:24.665579   36162 out.go:291] Setting OutFile to fd 1 ...
	I0625 15:55:24.665814   36162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 15:55:24.665822   36162 out.go:304] Setting ErrFile to fd 2...
	I0625 15:55:24.665826   36162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 15:55:24.665992   36162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 15:55:24.666568   36162 out.go:298] Setting JSON to false
	I0625 15:55:24.667432   36162 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5869,"bootTime":1719325056,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0625 15:55:24.667481   36162 start.go:139] virtualization: kvm guest
	I0625 15:55:24.669441   36162 out.go:177] * [ha-674765] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0625 15:55:24.671072   36162 out.go:177]   - MINIKUBE_LOCATION=19128
	I0625 15:55:24.671130   36162 notify.go:220] Checking for updates...
	I0625 15:55:24.673413   36162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0625 15:55:24.674621   36162 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19128-13846/kubeconfig
	I0625 15:55:24.675912   36162 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 15:55:24.677153   36162 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0625 15:55:24.678419   36162 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0625 15:55:24.679894   36162 driver.go:392] Setting default libvirt URI to qemu:///system
	I0625 15:55:24.712722   36162 out.go:177] * Using the kvm2 driver based on user configuration
	I0625 15:55:24.714064   36162 start.go:297] selected driver: kvm2
	I0625 15:55:24.714080   36162 start.go:901] validating driver "kvm2" against <nil>
	I0625 15:55:24.714097   36162 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0625 15:55:24.714793   36162 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0625 15:55:24.714863   36162 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19128-13846/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0625 15:55:24.728271   36162 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0625 15:55:24.728309   36162 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0625 15:55:24.728479   36162 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0625 15:55:24.728536   36162 cni.go:84] Creating CNI manager for ""
	I0625 15:55:24.728549   36162 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0625 15:55:24.728554   36162 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0625 15:55:24.728604   36162 start.go:340] cluster config:
	{Name:ha-674765 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-674765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 15:55:24.728681   36162 iso.go:125] acquiring lock: {Name:mk76df652d5e768afc73443035d5ecb8b75ed16c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0625 15:55:24.730321   36162 out.go:177] * Starting "ha-674765" primary control-plane node in "ha-674765" cluster
	I0625 15:55:24.731585   36162 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0625 15:55:24.731613   36162 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0625 15:55:24.731623   36162 cache.go:56] Caching tarball of preloaded images
	I0625 15:55:24.731701   36162 preload.go:173] Found /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0625 15:55:24.731711   36162 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0625 15:55:24.732023   36162 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/config.json ...
	I0625 15:55:24.732062   36162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/config.json: {Name:mke8b11320ef2be457ca4f9c0954f95e94f8e488 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:55:24.732217   36162 start.go:360] acquireMachinesLock for ha-674765: {Name:mk2a1ebee912b37a2b68bf2f76641f82f8fc2fcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0625 15:55:24.732244   36162 start.go:364] duration metric: took 14.976µs to acquireMachinesLock for "ha-674765"
	I0625 15:55:24.732259   36162 start.go:93] Provisioning new machine with config: &{Name:ha-674765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-674765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0625 15:55:24.732320   36162 start.go:125] createHost starting for "" (driver="kvm2")
	I0625 15:55:24.734603   36162 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0625 15:55:24.734725   36162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:55:24.734760   36162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:55:24.747979   36162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44221
	I0625 15:55:24.748409   36162 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:55:24.748974   36162 main.go:141] libmachine: Using API Version  1
	I0625 15:55:24.748995   36162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:55:24.749268   36162 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:55:24.749434   36162 main.go:141] libmachine: (ha-674765) Calling .GetMachineName
	I0625 15:55:24.749539   36162 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 15:55:24.749674   36162 start.go:159] libmachine.API.Create for "ha-674765" (driver="kvm2")
	I0625 15:55:24.749698   36162 client.go:168] LocalClient.Create starting
	I0625 15:55:24.749736   36162 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem
	I0625 15:55:24.749770   36162 main.go:141] libmachine: Decoding PEM data...
	I0625 15:55:24.749788   36162 main.go:141] libmachine: Parsing certificate...
	I0625 15:55:24.749857   36162 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem
	I0625 15:55:24.749880   36162 main.go:141] libmachine: Decoding PEM data...
	I0625 15:55:24.749897   36162 main.go:141] libmachine: Parsing certificate...
	I0625 15:55:24.749931   36162 main.go:141] libmachine: Running pre-create checks...
	I0625 15:55:24.749943   36162 main.go:141] libmachine: (ha-674765) Calling .PreCreateCheck
	I0625 15:55:24.750218   36162 main.go:141] libmachine: (ha-674765) Calling .GetConfigRaw
	I0625 15:55:24.750575   36162 main.go:141] libmachine: Creating machine...
	I0625 15:55:24.750588   36162 main.go:141] libmachine: (ha-674765) Calling .Create
	I0625 15:55:24.750681   36162 main.go:141] libmachine: (ha-674765) Creating KVM machine...
	I0625 15:55:24.751783   36162 main.go:141] libmachine: (ha-674765) DBG | found existing default KVM network
	I0625 15:55:24.752430   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:24.752298   36185 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002091e0}
	I0625 15:55:24.752448   36162 main.go:141] libmachine: (ha-674765) DBG | created network xml: 
	I0625 15:55:24.752460   36162 main.go:141] libmachine: (ha-674765) DBG | <network>
	I0625 15:55:24.752472   36162 main.go:141] libmachine: (ha-674765) DBG |   <name>mk-ha-674765</name>
	I0625 15:55:24.752485   36162 main.go:141] libmachine: (ha-674765) DBG |   <dns enable='no'/>
	I0625 15:55:24.752495   36162 main.go:141] libmachine: (ha-674765) DBG |   
	I0625 15:55:24.752507   36162 main.go:141] libmachine: (ha-674765) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0625 15:55:24.752517   36162 main.go:141] libmachine: (ha-674765) DBG |     <dhcp>
	I0625 15:55:24.752542   36162 main.go:141] libmachine: (ha-674765) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0625 15:55:24.752566   36162 main.go:141] libmachine: (ha-674765) DBG |     </dhcp>
	I0625 15:55:24.752574   36162 main.go:141] libmachine: (ha-674765) DBG |   </ip>
	I0625 15:55:24.752581   36162 main.go:141] libmachine: (ha-674765) DBG |   
	I0625 15:55:24.752586   36162 main.go:141] libmachine: (ha-674765) DBG | </network>
	I0625 15:55:24.752593   36162 main.go:141] libmachine: (ha-674765) DBG | 
	I0625 15:55:24.757461   36162 main.go:141] libmachine: (ha-674765) DBG | trying to create private KVM network mk-ha-674765 192.168.39.0/24...
	I0625 15:55:24.820245   36162 main.go:141] libmachine: (ha-674765) DBG | private KVM network mk-ha-674765 192.168.39.0/24 created
	I0625 15:55:24.820274   36162 main.go:141] libmachine: (ha-674765) Setting up store path in /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765 ...
	I0625 15:55:24.820294   36162 main.go:141] libmachine: (ha-674765) Building disk image from file:///home/jenkins/minikube-integration/19128-13846/.minikube/cache/iso/amd64/minikube-v1.33.1-1719245461-19128-amd64.iso
	I0625 15:55:24.820314   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:24.820252   36185 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 15:55:24.820390   36162 main.go:141] libmachine: (ha-674765) Downloading /home/jenkins/minikube-integration/19128-13846/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19128-13846/.minikube/cache/iso/amd64/minikube-v1.33.1-1719245461-19128-amd64.iso...
	I0625 15:55:25.050812   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:25.050696   36185 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa...
	I0625 15:55:25.288789   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:25.288649   36185 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/ha-674765.rawdisk...
	I0625 15:55:25.288820   36162 main.go:141] libmachine: (ha-674765) DBG | Writing magic tar header
	I0625 15:55:25.288833   36162 main.go:141] libmachine: (ha-674765) DBG | Writing SSH key tar header
	I0625 15:55:25.288847   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:25.288760   36185 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765 ...
	I0625 15:55:25.288868   36162 main.go:141] libmachine: (ha-674765) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765
	I0625 15:55:25.288876   36162 main.go:141] libmachine: (ha-674765) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube/machines
	I0625 15:55:25.288884   36162 main.go:141] libmachine: (ha-674765) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765 (perms=drwx------)
	I0625 15:55:25.288894   36162 main.go:141] libmachine: (ha-674765) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube/machines (perms=drwxr-xr-x)
	I0625 15:55:25.288900   36162 main.go:141] libmachine: (ha-674765) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube (perms=drwxr-xr-x)
	I0625 15:55:25.288906   36162 main.go:141] libmachine: (ha-674765) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846 (perms=drwxrwxr-x)
	I0625 15:55:25.288911   36162 main.go:141] libmachine: (ha-674765) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0625 15:55:25.288920   36162 main.go:141] libmachine: (ha-674765) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0625 15:55:25.288927   36162 main.go:141] libmachine: (ha-674765) Creating domain...
	I0625 15:55:25.288939   36162 main.go:141] libmachine: (ha-674765) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 15:55:25.288955   36162 main.go:141] libmachine: (ha-674765) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846
	I0625 15:55:25.288965   36162 main.go:141] libmachine: (ha-674765) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0625 15:55:25.288972   36162 main.go:141] libmachine: (ha-674765) DBG | Checking permissions on dir: /home/jenkins
	I0625 15:55:25.288980   36162 main.go:141] libmachine: (ha-674765) DBG | Checking permissions on dir: /home
	I0625 15:55:25.288987   36162 main.go:141] libmachine: (ha-674765) DBG | Skipping /home - not owner
	I0625 15:55:25.289917   36162 main.go:141] libmachine: (ha-674765) define libvirt domain using xml: 
	I0625 15:55:25.289940   36162 main.go:141] libmachine: (ha-674765) <domain type='kvm'>
	I0625 15:55:25.289962   36162 main.go:141] libmachine: (ha-674765)   <name>ha-674765</name>
	I0625 15:55:25.289976   36162 main.go:141] libmachine: (ha-674765)   <memory unit='MiB'>2200</memory>
	I0625 15:55:25.290008   36162 main.go:141] libmachine: (ha-674765)   <vcpu>2</vcpu>
	I0625 15:55:25.290030   36162 main.go:141] libmachine: (ha-674765)   <features>
	I0625 15:55:25.290045   36162 main.go:141] libmachine: (ha-674765)     <acpi/>
	I0625 15:55:25.290061   36162 main.go:141] libmachine: (ha-674765)     <apic/>
	I0625 15:55:25.290070   36162 main.go:141] libmachine: (ha-674765)     <pae/>
	I0625 15:55:25.290081   36162 main.go:141] libmachine: (ha-674765)     
	I0625 15:55:25.290091   36162 main.go:141] libmachine: (ha-674765)   </features>
	I0625 15:55:25.290102   36162 main.go:141] libmachine: (ha-674765)   <cpu mode='host-passthrough'>
	I0625 15:55:25.290112   36162 main.go:141] libmachine: (ha-674765)   
	I0625 15:55:25.290123   36162 main.go:141] libmachine: (ha-674765)   </cpu>
	I0625 15:55:25.290138   36162 main.go:141] libmachine: (ha-674765)   <os>
	I0625 15:55:25.290151   36162 main.go:141] libmachine: (ha-674765)     <type>hvm</type>
	I0625 15:55:25.290163   36162 main.go:141] libmachine: (ha-674765)     <boot dev='cdrom'/>
	I0625 15:55:25.290173   36162 main.go:141] libmachine: (ha-674765)     <boot dev='hd'/>
	I0625 15:55:25.290186   36162 main.go:141] libmachine: (ha-674765)     <bootmenu enable='no'/>
	I0625 15:55:25.290193   36162 main.go:141] libmachine: (ha-674765)   </os>
	I0625 15:55:25.290199   36162 main.go:141] libmachine: (ha-674765)   <devices>
	I0625 15:55:25.290206   36162 main.go:141] libmachine: (ha-674765)     <disk type='file' device='cdrom'>
	I0625 15:55:25.290214   36162 main.go:141] libmachine: (ha-674765)       <source file='/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/boot2docker.iso'/>
	I0625 15:55:25.290222   36162 main.go:141] libmachine: (ha-674765)       <target dev='hdc' bus='scsi'/>
	I0625 15:55:25.290227   36162 main.go:141] libmachine: (ha-674765)       <readonly/>
	I0625 15:55:25.290245   36162 main.go:141] libmachine: (ha-674765)     </disk>
	I0625 15:55:25.290253   36162 main.go:141] libmachine: (ha-674765)     <disk type='file' device='disk'>
	I0625 15:55:25.290259   36162 main.go:141] libmachine: (ha-674765)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0625 15:55:25.290268   36162 main.go:141] libmachine: (ha-674765)       <source file='/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/ha-674765.rawdisk'/>
	I0625 15:55:25.290273   36162 main.go:141] libmachine: (ha-674765)       <target dev='hda' bus='virtio'/>
	I0625 15:55:25.290281   36162 main.go:141] libmachine: (ha-674765)     </disk>
	I0625 15:55:25.290285   36162 main.go:141] libmachine: (ha-674765)     <interface type='network'>
	I0625 15:55:25.290295   36162 main.go:141] libmachine: (ha-674765)       <source network='mk-ha-674765'/>
	I0625 15:55:25.290304   36162 main.go:141] libmachine: (ha-674765)       <model type='virtio'/>
	I0625 15:55:25.290312   36162 main.go:141] libmachine: (ha-674765)     </interface>
	I0625 15:55:25.290322   36162 main.go:141] libmachine: (ha-674765)     <interface type='network'>
	I0625 15:55:25.290328   36162 main.go:141] libmachine: (ha-674765)       <source network='default'/>
	I0625 15:55:25.290335   36162 main.go:141] libmachine: (ha-674765)       <model type='virtio'/>
	I0625 15:55:25.290340   36162 main.go:141] libmachine: (ha-674765)     </interface>
	I0625 15:55:25.290346   36162 main.go:141] libmachine: (ha-674765)     <serial type='pty'>
	I0625 15:55:25.290351   36162 main.go:141] libmachine: (ha-674765)       <target port='0'/>
	I0625 15:55:25.290357   36162 main.go:141] libmachine: (ha-674765)     </serial>
	I0625 15:55:25.290362   36162 main.go:141] libmachine: (ha-674765)     <console type='pty'>
	I0625 15:55:25.290367   36162 main.go:141] libmachine: (ha-674765)       <target type='serial' port='0'/>
	I0625 15:55:25.290374   36162 main.go:141] libmachine: (ha-674765)     </console>
	I0625 15:55:25.290379   36162 main.go:141] libmachine: (ha-674765)     <rng model='virtio'>
	I0625 15:55:25.290387   36162 main.go:141] libmachine: (ha-674765)       <backend model='random'>/dev/random</backend>
	I0625 15:55:25.290390   36162 main.go:141] libmachine: (ha-674765)     </rng>
	I0625 15:55:25.290397   36162 main.go:141] libmachine: (ha-674765)     
	I0625 15:55:25.290402   36162 main.go:141] libmachine: (ha-674765)     
	I0625 15:55:25.290422   36162 main.go:141] libmachine: (ha-674765)   </devices>
	I0625 15:55:25.290441   36162 main.go:141] libmachine: (ha-674765) </domain>
	I0625 15:55:25.290492   36162 main.go:141] libmachine: (ha-674765) 
	I0625 15:55:25.294419   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:e3:7e:66 in network default
	I0625 15:55:25.294939   36162 main.go:141] libmachine: (ha-674765) Ensuring networks are active...
	I0625 15:55:25.294974   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:25.295556   36162 main.go:141] libmachine: (ha-674765) Ensuring network default is active
	I0625 15:55:25.295817   36162 main.go:141] libmachine: (ha-674765) Ensuring network mk-ha-674765 is active
	I0625 15:55:25.296305   36162 main.go:141] libmachine: (ha-674765) Getting domain xml...
	I0625 15:55:25.296924   36162 main.go:141] libmachine: (ha-674765) Creating domain...
	I0625 15:55:26.449225   36162 main.go:141] libmachine: (ha-674765) Waiting to get IP...
	I0625 15:55:26.450173   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:26.450538   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find current IP address of domain ha-674765 in network mk-ha-674765
	I0625 15:55:26.450575   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:26.450528   36185 retry.go:31] will retry after 222.087964ms: waiting for machine to come up
	I0625 15:55:26.673822   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:26.674220   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find current IP address of domain ha-674765 in network mk-ha-674765
	I0625 15:55:26.674256   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:26.674178   36185 retry.go:31] will retry after 287.859085ms: waiting for machine to come up
	I0625 15:55:26.963685   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:26.964090   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find current IP address of domain ha-674765 in network mk-ha-674765
	I0625 15:55:26.964118   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:26.964047   36185 retry.go:31] will retry after 424.000535ms: waiting for machine to come up
	I0625 15:55:27.389554   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:27.389984   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find current IP address of domain ha-674765 in network mk-ha-674765
	I0625 15:55:27.390007   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:27.389946   36185 retry.go:31] will retry after 387.926466ms: waiting for machine to come up
	I0625 15:55:27.779437   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:27.779809   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find current IP address of domain ha-674765 in network mk-ha-674765
	I0625 15:55:27.779829   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:27.779786   36185 retry.go:31] will retry after 561.030334ms: waiting for machine to come up
	I0625 15:55:28.342538   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:28.342974   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find current IP address of domain ha-674765 in network mk-ha-674765
	I0625 15:55:28.342999   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:28.342938   36185 retry.go:31] will retry after 584.411363ms: waiting for machine to come up
	I0625 15:55:28.928603   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:28.928954   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find current IP address of domain ha-674765 in network mk-ha-674765
	I0625 15:55:28.928978   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:28.928906   36185 retry.go:31] will retry after 1.187786363s: waiting for machine to come up
	I0625 15:55:30.118698   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:30.119085   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find current IP address of domain ha-674765 in network mk-ha-674765
	I0625 15:55:30.119113   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:30.119029   36185 retry.go:31] will retry after 1.349507736s: waiting for machine to come up
	I0625 15:55:31.470570   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:31.470992   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find current IP address of domain ha-674765 in network mk-ha-674765
	I0625 15:55:31.471019   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:31.470961   36185 retry.go:31] will retry after 1.622865794s: waiting for machine to come up
	I0625 15:55:33.095647   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:33.095979   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find current IP address of domain ha-674765 in network mk-ha-674765
	I0625 15:55:33.096027   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:33.095945   36185 retry.go:31] will retry after 2.243945522s: waiting for machine to come up
	I0625 15:55:35.341661   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:35.342056   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find current IP address of domain ha-674765 in network mk-ha-674765
	I0625 15:55:35.342081   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:35.342028   36185 retry.go:31] will retry after 2.325430801s: waiting for machine to come up
	I0625 15:55:37.670562   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:37.670939   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find current IP address of domain ha-674765 in network mk-ha-674765
	I0625 15:55:37.670967   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:37.670902   36185 retry.go:31] will retry after 3.014906519s: waiting for machine to come up
	I0625 15:55:40.686901   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:40.687334   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find current IP address of domain ha-674765 in network mk-ha-674765
	I0625 15:55:40.687359   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:40.687272   36185 retry.go:31] will retry after 3.1399809s: waiting for machine to come up
	I0625 15:55:43.830396   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:43.830740   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find current IP address of domain ha-674765 in network mk-ha-674765
	I0625 15:55:43.830762   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:43.830697   36185 retry.go:31] will retry after 4.710057228s: waiting for machine to come up
	I0625 15:55:48.545128   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:48.545528   36162 main.go:141] libmachine: (ha-674765) Found IP for machine: 192.168.39.128
	I0625 15:55:48.545552   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has current primary IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:48.545560   36162 main.go:141] libmachine: (ha-674765) Reserving static IP address...
	I0625 15:55:48.545852   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find host DHCP lease matching {name: "ha-674765", mac: "52:54:00:6e:3a:48", ip: "192.168.39.128"} in network mk-ha-674765
	I0625 15:55:48.613796   36162 main.go:141] libmachine: (ha-674765) DBG | Getting to WaitForSSH function...
	I0625 15:55:48.613824   36162 main.go:141] libmachine: (ha-674765) Reserved static IP address: 192.168.39.128
	I0625 15:55:48.613838   36162 main.go:141] libmachine: (ha-674765) Waiting for SSH to be available...
	I0625 15:55:48.616086   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:48.616408   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:48.616433   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:48.616588   36162 main.go:141] libmachine: (ha-674765) DBG | Using SSH client type: external
	I0625 15:55:48.616613   36162 main.go:141] libmachine: (ha-674765) DBG | Using SSH private key: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa (-rw-------)
	I0625 15:55:48.616651   36162 main.go:141] libmachine: (ha-674765) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0625 15:55:48.616671   36162 main.go:141] libmachine: (ha-674765) DBG | About to run SSH command:
	I0625 15:55:48.616687   36162 main.go:141] libmachine: (ha-674765) DBG | exit 0
	I0625 15:55:48.741955   36162 main.go:141] libmachine: (ha-674765) DBG | SSH cmd err, output: <nil>: 
	I0625 15:55:48.742241   36162 main.go:141] libmachine: (ha-674765) KVM machine creation complete!
	I0625 15:55:48.742529   36162 main.go:141] libmachine: (ha-674765) Calling .GetConfigRaw
	I0625 15:55:48.743022   36162 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 15:55:48.743198   36162 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 15:55:48.743336   36162 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0625 15:55:48.743350   36162 main.go:141] libmachine: (ha-674765) Calling .GetState
	I0625 15:55:48.744495   36162 main.go:141] libmachine: Detecting operating system of created instance...
	I0625 15:55:48.744510   36162 main.go:141] libmachine: Waiting for SSH to be available...
	I0625 15:55:48.744525   36162 main.go:141] libmachine: Getting to WaitForSSH function...
	I0625 15:55:48.744535   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:55:48.746567   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:48.746928   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:48.746955   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:48.747081   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:55:48.747237   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:48.747396   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:48.747624   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:55:48.747780   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:55:48.747953   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0625 15:55:48.747963   36162 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0625 15:55:48.853464   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0625 15:55:48.853495   36162 main.go:141] libmachine: Detecting the provisioner...
	I0625 15:55:48.853502   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:55:48.856395   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:48.856736   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:48.856773   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:48.856914   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:55:48.857123   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:48.857372   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:48.857530   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:55:48.857693   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:55:48.857891   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0625 15:55:48.857903   36162 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0625 15:55:48.966886   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0625 15:55:48.967012   36162 main.go:141] libmachine: found compatible host: buildroot
	I0625 15:55:48.967031   36162 main.go:141] libmachine: Provisioning with buildroot...
	I0625 15:55:48.967052   36162 main.go:141] libmachine: (ha-674765) Calling .GetMachineName
	I0625 15:55:48.967275   36162 buildroot.go:166] provisioning hostname "ha-674765"
	I0625 15:55:48.967301   36162 main.go:141] libmachine: (ha-674765) Calling .GetMachineName
	I0625 15:55:48.967499   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:55:48.969799   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:48.970086   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:48.970127   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:48.970284   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:55:48.970446   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:48.970616   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:48.970726   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:55:48.970871   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:55:48.971070   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0625 15:55:48.971084   36162 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-674765 && echo "ha-674765" | sudo tee /etc/hostname
	I0625 15:55:49.092063   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-674765
	
	I0625 15:55:49.092088   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:55:49.094515   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:49.095167   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:49.095194   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:49.095608   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:55:49.095807   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:49.095962   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:49.096058   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:55:49.096270   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:55:49.096433   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0625 15:55:49.096449   36162 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-674765' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-674765/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-674765' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0625 15:55:49.210753   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0625 15:55:49.210781   36162 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19128-13846/.minikube CaCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19128-13846/.minikube}
	I0625 15:55:49.210818   36162 buildroot.go:174] setting up certificates
	I0625 15:55:49.210834   36162 provision.go:84] configureAuth start
	I0625 15:55:49.210860   36162 main.go:141] libmachine: (ha-674765) Calling .GetMachineName
	I0625 15:55:49.211116   36162 main.go:141] libmachine: (ha-674765) Calling .GetIP
	I0625 15:55:49.213411   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:49.213698   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:49.213726   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:49.213825   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:55:49.215829   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:49.216199   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:49.216226   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:49.216377   36162 provision.go:143] copyHostCerts
	I0625 15:55:49.216405   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem
	I0625 15:55:49.216447   36162 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem, removing ...
	I0625 15:55:49.216456   36162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem
	I0625 15:55:49.216513   36162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem (1078 bytes)
	I0625 15:55:49.216590   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem
	I0625 15:55:49.216607   36162 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem, removing ...
	I0625 15:55:49.216613   36162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem
	I0625 15:55:49.216641   36162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem (1123 bytes)
	I0625 15:55:49.216693   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem
	I0625 15:55:49.216708   36162 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem, removing ...
	I0625 15:55:49.216714   36162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem
	I0625 15:55:49.216733   36162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem (1679 bytes)
	I0625 15:55:49.216789   36162 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem org=jenkins.ha-674765 san=[127.0.0.1 192.168.39.128 ha-674765 localhost minikube]
	I0625 15:55:49.483969   36162 provision.go:177] copyRemoteCerts
	I0625 15:55:49.484017   36162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0625 15:55:49.484037   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:55:49.486572   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:49.486879   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:49.486908   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:49.487050   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:55:49.487215   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:49.487366   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:55:49.487461   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 15:55:49.572233   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0625 15:55:49.572290   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0625 15:55:49.595865   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0625 15:55:49.595923   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0625 15:55:49.618380   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0625 15:55:49.618431   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0625 15:55:49.640840   36162 provision.go:87] duration metric: took 429.993244ms to configureAuth
	I0625 15:55:49.640859   36162 buildroot.go:189] setting minikube options for container-runtime
	I0625 15:55:49.641037   36162 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 15:55:49.641163   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:55:49.643407   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:49.643711   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:49.643740   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:49.643940   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:55:49.644183   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:49.644344   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:49.644447   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:55:49.644601   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:55:49.644751   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0625 15:55:49.644767   36162 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0625 15:55:49.901508   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0625 15:55:49.901537   36162 main.go:141] libmachine: Checking connection to Docker...
	I0625 15:55:49.901549   36162 main.go:141] libmachine: (ha-674765) Calling .GetURL
	I0625 15:55:49.902994   36162 main.go:141] libmachine: (ha-674765) DBG | Using libvirt version 6000000
	I0625 15:55:49.905144   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:49.905442   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:49.905463   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:49.905614   36162 main.go:141] libmachine: Docker is up and running!
	I0625 15:55:49.905635   36162 main.go:141] libmachine: Reticulating splines...
	I0625 15:55:49.905641   36162 client.go:171] duration metric: took 25.155932528s to LocalClient.Create
	I0625 15:55:49.905658   36162 start.go:167] duration metric: took 25.15598501s to libmachine.API.Create "ha-674765"
	I0625 15:55:49.905668   36162 start.go:293] postStartSetup for "ha-674765" (driver="kvm2")
	I0625 15:55:49.905676   36162 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0625 15:55:49.905691   36162 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 15:55:49.905900   36162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0625 15:55:49.905925   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:55:49.907752   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:49.908050   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:49.908082   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:49.908190   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:55:49.908355   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:49.908493   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:55:49.908623   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 15:55:49.992762   36162 ssh_runner.go:195] Run: cat /etc/os-release
	I0625 15:55:49.996757   36162 info.go:137] Remote host: Buildroot 2023.02.9
	I0625 15:55:49.996775   36162 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/addons for local assets ...
	I0625 15:55:49.996826   36162 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/files for local assets ...
	I0625 15:55:49.996903   36162 filesync.go:149] local asset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> 212392.pem in /etc/ssl/certs
	I0625 15:55:49.996913   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> /etc/ssl/certs/212392.pem
	I0625 15:55:49.996999   36162 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0625 15:55:50.006422   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem --> /etc/ssl/certs/212392.pem (1708 bytes)
	I0625 15:55:50.029248   36162 start.go:296] duration metric: took 123.570932ms for postStartSetup
	I0625 15:55:50.029287   36162 main.go:141] libmachine: (ha-674765) Calling .GetConfigRaw
	I0625 15:55:50.029897   36162 main.go:141] libmachine: (ha-674765) Calling .GetIP
	I0625 15:55:50.032220   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:50.032534   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:50.032570   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:50.032767   36162 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/config.json ...
	I0625 15:55:50.032948   36162 start.go:128] duration metric: took 25.300618567s to createHost
	I0625 15:55:50.032967   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:55:50.034984   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:50.035267   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:50.035305   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:50.035424   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:55:50.035597   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:50.035746   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:50.035866   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:55:50.036010   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:55:50.036155   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0625 15:55:50.036168   36162 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0625 15:55:50.142867   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719330950.113203034
	
	I0625 15:55:50.142890   36162 fix.go:216] guest clock: 1719330950.113203034
	I0625 15:55:50.142896   36162 fix.go:229] Guest: 2024-06-25 15:55:50.113203034 +0000 UTC Remote: 2024-06-25 15:55:50.032959072 +0000 UTC m=+25.400781994 (delta=80.243962ms)
	I0625 15:55:50.142916   36162 fix.go:200] guest clock delta is within tolerance: 80.243962ms
	I0625 15:55:50.142922   36162 start.go:83] releasing machines lock for "ha-674765", held for 25.410670041s
	I0625 15:55:50.142946   36162 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 15:55:50.143188   36162 main.go:141] libmachine: (ha-674765) Calling .GetIP
	I0625 15:55:50.145581   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:50.145896   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:50.145924   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:50.146053   36162 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 15:55:50.146576   36162 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 15:55:50.146741   36162 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 15:55:50.146792   36162 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0625 15:55:50.146843   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:55:50.146956   36162 ssh_runner.go:195] Run: cat /version.json
	I0625 15:55:50.146973   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:55:50.149378   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:50.149515   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:50.149676   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:50.149694   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:50.149849   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:55:50.149921   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:50.149954   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:50.149994   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:50.150139   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:55:50.150173   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:55:50.150273   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 15:55:50.150326   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:50.150458   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:55:50.150592   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 15:55:50.227474   36162 ssh_runner.go:195] Run: systemctl --version
	I0625 15:55:50.250228   36162 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0625 15:55:50.409021   36162 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0625 15:55:50.415168   36162 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0625 15:55:50.415220   36162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0625 15:55:50.434896   36162 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0625 15:55:50.434910   36162 start.go:494] detecting cgroup driver to use...
	I0625 15:55:50.434948   36162 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0625 15:55:50.455185   36162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0625 15:55:50.471242   36162 docker.go:217] disabling cri-docker service (if available) ...
	I0625 15:55:50.471279   36162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0625 15:55:50.484823   36162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0625 15:55:50.499278   36162 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0625 15:55:50.617798   36162 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0625 15:55:50.762365   36162 docker.go:233] disabling docker service ...
	I0625 15:55:50.762423   36162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0625 15:55:50.777064   36162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0625 15:55:50.790038   36162 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0625 15:55:50.917709   36162 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0625 15:55:51.024372   36162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0625 15:55:51.038561   36162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0625 15:55:51.056392   36162 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0625 15:55:51.056450   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:55:51.066822   36162 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0625 15:55:51.066864   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:55:51.077158   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:55:51.087212   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:55:51.097401   36162 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0625 15:55:51.107728   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:55:51.117862   36162 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:55:51.134067   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:55:51.144255   36162 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0625 15:55:51.153405   36162 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0625 15:55:51.153467   36162 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0625 15:55:51.165743   36162 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0625 15:55:51.174905   36162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 15:55:51.278267   36162 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0625 15:55:51.415511   36162 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0625 15:55:51.415587   36162 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0625 15:55:51.420302   36162 start.go:562] Will wait 60s for crictl version
	I0625 15:55:51.420365   36162 ssh_runner.go:195] Run: which crictl
	I0625 15:55:51.424005   36162 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0625 15:55:51.461545   36162 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0625 15:55:51.461604   36162 ssh_runner.go:195] Run: crio --version
	I0625 15:55:51.488841   36162 ssh_runner.go:195] Run: crio --version
	I0625 15:55:51.518881   36162 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0625 15:55:51.520141   36162 main.go:141] libmachine: (ha-674765) Calling .GetIP
	I0625 15:55:51.522528   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:51.522845   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:51.522865   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:51.523098   36162 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0625 15:55:51.527146   36162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0625 15:55:51.540086   36162 kubeadm.go:877] updating cluster {Name:ha-674765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-674765 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0625 15:55:51.540176   36162 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0625 15:55:51.540212   36162 ssh_runner.go:195] Run: sudo crictl images --output json
	I0625 15:55:51.572747   36162 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0625 15:55:51.572795   36162 ssh_runner.go:195] Run: which lz4
	I0625 15:55:51.576575   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0625 15:55:51.576668   36162 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0625 15:55:51.580841   36162 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0625 15:55:51.580862   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0625 15:55:52.957341   36162 crio.go:462] duration metric: took 1.380702907s to copy over tarball
	I0625 15:55:52.957422   36162 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0625 15:55:54.998908   36162 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.041453222s)
	I0625 15:55:54.998937   36162 crio.go:469] duration metric: took 2.041574258s to extract the tarball
	I0625 15:55:54.998944   36162 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0625 15:55:55.036762   36162 ssh_runner.go:195] Run: sudo crictl images --output json
	I0625 15:55:55.081347   36162 crio.go:514] all images are preloaded for cri-o runtime.
	I0625 15:55:55.081367   36162 cache_images.go:84] Images are preloaded, skipping loading
	I0625 15:55:55.081373   36162 kubeadm.go:928] updating node { 192.168.39.128 8443 v1.30.2 crio true true} ...
	I0625 15:55:55.081470   36162 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-674765 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-674765 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
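
Note: the kubelet [Unit]/[Service] fragment above is written to a systemd drop-in (the 10-kubeadm.conf scp a few lines below). A minimal sketch for inspecting the effective unit on the node, assuming shell access to the guest:

    # sketch: show the kubelet unit together with minikube's drop-in override
    systemctl cat kubelet
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
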
	I0625 15:55:55.081530   36162 ssh_runner.go:195] Run: crio config
	I0625 15:55:55.126079   36162 cni.go:84] Creating CNI manager for ""
	I0625 15:55:55.126096   36162 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0625 15:55:55.126104   36162 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0625 15:55:55.126123   36162 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.128 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-674765 NodeName:ha-674765 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0625 15:55:55.126238   36162 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.128
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-674765"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
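
Note: the kubeadm config printed above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new and later promoted to /var/tmp/minikube/kubeadm.yaml before kubeadm init runs with --config (see the Start line further down). As a hedged sketch, not something this test run executed, the rendered file can be exercised without touching cluster state via kubeadm's dry-run mode, assuming shell access to the guest:

    # sketch: parse and dry-run the generated config with the pinned kubeadm binary
    sudo /var/lib/minikube/binaries/v1.30.2/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run
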
	
	I0625 15:55:55.126259   36162 kube-vip.go:115] generating kube-vip config ...
	I0625 15:55:55.126302   36162 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0625 15:55:55.143906   36162 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0625 15:55:55.143999   36162 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
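
Note: the static Pod above runs kube-vip, which holds the HA virtual IP 192.168.39.254 (the APIServerHAVIP) on the elected control-plane node and, with lb_enable set, load-balances port 8443 across control planes. A minimal sketch for checking the election and the VIP binding, assuming the lease name given by vip_leasename above and working kubectl access:

    # sketch: which node currently holds the kube-vip leader lease
    kubectl -n kube-system get lease plndr-cp-lock
    # sketch: on the leader, the VIP should be bound to the vip_interface (eth0)
    ip addr show eth0 | grep 192.168.39.254
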
	I0625 15:55:55.144047   36162 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0625 15:55:55.153974   36162 binaries.go:44] Found k8s binaries, skipping transfer
	I0625 15:55:55.154040   36162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0625 15:55:55.163602   36162 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0625 15:55:55.179582   36162 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0625 15:55:55.195114   36162 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0625 15:55:55.210668   36162 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0625 15:55:55.226274   36162 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0625 15:55:55.229838   36162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0625 15:55:55.241546   36162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 15:55:55.345411   36162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0625 15:55:55.361122   36162 certs.go:68] Setting up /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765 for IP: 192.168.39.128
	I0625 15:55:55.361147   36162 certs.go:194] generating shared ca certs ...
	I0625 15:55:55.361166   36162 certs.go:226] acquiring lock for ca certs: {Name:mkac904b769881cd26c50f043dc80ff92937f71f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:55:55.361339   36162 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key
	I0625 15:55:55.361428   36162 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key
	I0625 15:55:55.361447   36162 certs.go:256] generating profile certs ...
	I0625 15:55:55.361516   36162 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.key
	I0625 15:55:55.361534   36162 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.crt with IP's: []
	I0625 15:55:55.481396   36162 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.crt ...
	I0625 15:55:55.481423   36162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.crt: {Name:mk634c6de4b44b2ccd54b0092cddfbae0f8e98b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:55:55.481599   36162 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.key ...
	I0625 15:55:55.481614   36162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.key: {Name:mk4d2d01e3f027181db556966898190cb645a4de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:55:55.481711   36162 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.222299a4
	I0625 15:55:55.481731   36162 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.222299a4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.128 192.168.39.254]
	I0625 15:55:55.692389   36162 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.222299a4 ...
	I0625 15:55:55.692417   36162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.222299a4: {Name:mkc1cda21cad476115bb27b306008e1b17c2836a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:55:55.692580   36162 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.222299a4 ...
	I0625 15:55:55.692596   36162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.222299a4: {Name:mk91e0f955e3f071068275bc216d2a474b5df152 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:55:55.692690   36162 certs.go:381] copying /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.222299a4 -> /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt
	I0625 15:55:55.692777   36162 certs.go:385] copying /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.222299a4 -> /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key
	I0625 15:55:55.692854   36162 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.key
	I0625 15:55:55.692874   36162 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.crt with IP's: []
	I0625 15:55:55.894014   36162 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.crt ...
	I0625 15:55:55.894043   36162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.crt: {Name:mk73ccd38d492e2b2476dc85013c84204bb41e27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:55:55.894211   36162 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.key ...
	I0625 15:55:55.894225   36162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.key: {Name:mkd5e59badd38772aa6667a35929b726353b412d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:55:55.894317   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0625 15:55:55.894338   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0625 15:55:55.894353   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0625 15:55:55.894369   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0625 15:55:55.894388   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0625 15:55:55.894404   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0625 15:55:55.894421   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0625 15:55:55.894441   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0625 15:55:55.894519   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem (1338 bytes)
	W0625 15:55:55.894573   36162 certs.go:480] ignoring /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239_empty.pem, impossibly tiny 0 bytes
	I0625 15:55:55.894595   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem (1679 bytes)
	I0625 15:55:55.894633   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem (1078 bytes)
	I0625 15:55:55.894665   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem (1123 bytes)
	I0625 15:55:55.894700   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem (1679 bytes)
	I0625 15:55:55.894753   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem (1708 bytes)
	I0625 15:55:55.894790   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem -> /usr/share/ca-certificates/21239.pem
	I0625 15:55:55.894810   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> /usr/share/ca-certificates/212392.pem
	I0625 15:55:55.894828   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0625 15:55:55.895449   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0625 15:55:55.920997   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0625 15:55:55.943502   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0625 15:55:55.966165   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0625 15:55:55.989049   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0625 15:55:56.011606   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0625 15:55:56.034379   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0625 15:55:56.056631   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0625 15:55:56.078948   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem --> /usr/share/ca-certificates/21239.pem (1338 bytes)
	I0625 15:55:56.101031   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem --> /usr/share/ca-certificates/212392.pem (1708 bytes)
	I0625 15:55:56.123517   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0625 15:55:56.154563   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0625 15:55:56.171469   36162 ssh_runner.go:195] Run: openssl version
	I0625 15:55:56.177275   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0625 15:55:56.187653   36162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0625 15:55:56.192219   36162 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 25 15:10 /usr/share/ca-certificates/minikubeCA.pem
	I0625 15:55:56.192267   36162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0625 15:55:56.201111   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0625 15:55:56.211415   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21239.pem && ln -fs /usr/share/ca-certificates/21239.pem /etc/ssl/certs/21239.pem"
	I0625 15:55:56.221456   36162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21239.pem
	I0625 15:55:56.225783   36162 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 25 15:51 /usr/share/ca-certificates/21239.pem
	I0625 15:55:56.225813   36162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21239.pem
	I0625 15:55:56.231245   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21239.pem /etc/ssl/certs/51391683.0"
	I0625 15:55:56.241405   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/212392.pem && ln -fs /usr/share/ca-certificates/212392.pem /etc/ssl/certs/212392.pem"
	I0625 15:55:56.251823   36162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/212392.pem
	I0625 15:55:56.256042   36162 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 25 15:51 /usr/share/ca-certificates/212392.pem
	I0625 15:55:56.256085   36162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/212392.pem
	I0625 15:55:56.261335   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/212392.pem /etc/ssl/certs/3ec20f2e.0"
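
Note: the openssl/ln pairs above implement the standard OpenSSL CA directory layout: each certificate under /etc/ssl/certs gets a symlink named after its subject hash plus a ".0" suffix (hence b5213941.0, 51391683.0 and 3ec20f2e.0). A minimal sketch of the same convention for one certificate, assuming shell access to the guest:

    # sketch: compute the subject hash and create the <hash>.0 symlink by hand
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
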
	I0625 15:55:56.271455   36162 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0625 15:55:56.275322   36162 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0625 15:55:56.275368   36162 kubeadm.go:391] StartCluster: {Name:ha-674765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-674765 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 15:55:56.275437   36162 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0625 15:55:56.275490   36162 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0625 15:55:56.312272   36162 cri.go:89] found id: ""
	I0625 15:55:56.312349   36162 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0625 15:55:56.321955   36162 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0625 15:55:56.331073   36162 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0625 15:55:56.340161   36162 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0625 15:55:56.340174   36162 kubeadm.go:156] found existing configuration files:
	
	I0625 15:55:56.340211   36162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0625 15:55:56.348919   36162 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0625 15:55:56.348954   36162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0625 15:55:56.357980   36162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0625 15:55:56.366690   36162 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0625 15:55:56.366722   36162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0625 15:55:56.375527   36162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0625 15:55:56.384050   36162 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0625 15:55:56.384092   36162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0625 15:55:56.392784   36162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0625 15:55:56.401224   36162 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0625 15:55:56.401253   36162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0625 15:55:56.410192   36162 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0625 15:55:56.630938   36162 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0625 15:56:07.250800   36162 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0625 15:56:07.250886   36162 kubeadm.go:309] [preflight] Running pre-flight checks
	I0625 15:56:07.250948   36162 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0625 15:56:07.251032   36162 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0625 15:56:07.251166   36162 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0625 15:56:07.251289   36162 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0625 15:56:07.252642   36162 out.go:204]   - Generating certificates and keys ...
	I0625 15:56:07.252707   36162 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0625 15:56:07.252763   36162 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0625 15:56:07.252817   36162 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0625 15:56:07.252874   36162 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0625 15:56:07.252926   36162 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0625 15:56:07.252969   36162 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0625 15:56:07.253011   36162 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0625 15:56:07.253102   36162 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-674765 localhost] and IPs [192.168.39.128 127.0.0.1 ::1]
	I0625 15:56:07.253144   36162 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0625 15:56:07.253287   36162 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-674765 localhost] and IPs [192.168.39.128 127.0.0.1 ::1]
	I0625 15:56:07.253398   36162 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0625 15:56:07.253497   36162 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0625 15:56:07.253566   36162 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0625 15:56:07.253661   36162 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0625 15:56:07.253710   36162 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0625 15:56:07.253755   36162 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0625 15:56:07.253800   36162 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0625 15:56:07.253881   36162 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0625 15:56:07.253968   36162 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0625 15:56:07.254082   36162 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0625 15:56:07.254144   36162 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0625 15:56:07.255497   36162 out.go:204]   - Booting up control plane ...
	I0625 15:56:07.255581   36162 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0625 15:56:07.255668   36162 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0625 15:56:07.255754   36162 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0625 15:56:07.255866   36162 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0625 15:56:07.255984   36162 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0625 15:56:07.256035   36162 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0625 15:56:07.256187   36162 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0625 15:56:07.256253   36162 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0625 15:56:07.256342   36162 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.414724ms
	I0625 15:56:07.256437   36162 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0625 15:56:07.256518   36162 kubeadm.go:309] [api-check] The API server is healthy after 6.136500068s
	I0625 15:56:07.256635   36162 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0625 15:56:07.256775   36162 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0625 15:56:07.256860   36162 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0625 15:56:07.257066   36162 kubeadm.go:309] [mark-control-plane] Marking the node ha-674765 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0625 15:56:07.257148   36162 kubeadm.go:309] [bootstrap-token] Using token: fawvb8.q5jg5dbcsoua7fro
	I0625 15:56:07.258304   36162 out.go:204]   - Configuring RBAC rules ...
	I0625 15:56:07.258405   36162 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0625 15:56:07.258498   36162 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0625 15:56:07.258620   36162 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0625 15:56:07.258764   36162 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0625 15:56:07.258902   36162 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0625 15:56:07.258983   36162 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0625 15:56:07.259084   36162 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0625 15:56:07.259127   36162 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0625 15:56:07.259175   36162 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0625 15:56:07.259182   36162 kubeadm.go:309] 
	I0625 15:56:07.259247   36162 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0625 15:56:07.259265   36162 kubeadm.go:309] 
	I0625 15:56:07.259322   36162 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0625 15:56:07.259328   36162 kubeadm.go:309] 
	I0625 15:56:07.259355   36162 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0625 15:56:07.259404   36162 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0625 15:56:07.259444   36162 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0625 15:56:07.259453   36162 kubeadm.go:309] 
	I0625 15:56:07.259509   36162 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0625 15:56:07.259517   36162 kubeadm.go:309] 
	I0625 15:56:07.259556   36162 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0625 15:56:07.259562   36162 kubeadm.go:309] 
	I0625 15:56:07.259628   36162 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0625 15:56:07.259724   36162 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0625 15:56:07.259818   36162 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0625 15:56:07.259827   36162 kubeadm.go:309] 
	I0625 15:56:07.259924   36162 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0625 15:56:07.260029   36162 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0625 15:56:07.260038   36162 kubeadm.go:309] 
	I0625 15:56:07.260129   36162 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token fawvb8.q5jg5dbcsoua7fro \
	I0625 15:56:07.260247   36162 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:df4523a4334c80aff4a7c2fc7b4a73691744a675a28cdb3d4468287f693ab03d \
	I0625 15:56:07.260276   36162 kubeadm.go:309] 	--control-plane 
	I0625 15:56:07.260285   36162 kubeadm.go:309] 
	I0625 15:56:07.260383   36162 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0625 15:56:07.260393   36162 kubeadm.go:309] 
	I0625 15:56:07.260490   36162 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token fawvb8.q5jg5dbcsoua7fro \
	I0625 15:56:07.260653   36162 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:df4523a4334c80aff4a7c2fc7b4a73691744a675a28cdb3d4468287f693ab03d 
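
Note: the --discovery-token-ca-cert-hash value in the join commands above is a SHA-256 digest of the cluster CA's public key. A minimal sketch for recomputing it on the control plane, using the certificatesDir from the kubeadm config earlier (/var/lib/minikube/certs); on a stock kubeadm host the path would be /etc/kubernetes/pki/ca.crt:

    # sketch: rederive the discovery token CA certificate hash from the CA cert
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
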
	I0625 15:56:07.260669   36162 cni.go:84] Creating CNI manager for ""
	I0625 15:56:07.260676   36162 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0625 15:56:07.261939   36162 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0625 15:56:07.262963   36162 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0625 15:56:07.268838   36162 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0625 15:56:07.268854   36162 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0625 15:56:07.288846   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0625 15:56:07.635446   36162 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0625 15:56:07.635529   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:07.635533   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-674765 minikube.k8s.io/updated_at=2024_06_25T15_56_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b minikube.k8s.io/name=ha-674765 minikube.k8s.io/primary=true
	I0625 15:56:07.838524   36162 ops.go:34] apiserver oom_adj: -16
	I0625 15:56:07.838594   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:08.339101   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:08.838997   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:09.338626   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:09.839575   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:10.339604   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:10.839529   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:11.339182   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:11.839203   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:12.338597   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:12.839579   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:13.339019   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:13.839408   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:14.339441   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:14.839398   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:15.338795   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:15.839652   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:16.339589   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:16.839361   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:17.338701   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:17.839196   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:18.339345   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:18.455382   36162 kubeadm.go:1107] duration metric: took 10.819926294s to wait for elevateKubeSystemPrivileges
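
Note: the burst of identical "kubectl get sa default" runs above is minikube polling until the controller manager has created the default ServiceAccount, which is what the elevateKubeSystemPrivileges duration on the next line measures. A minimal sketch of an equivalent standalone wait loop, assuming the same pinned kubectl binary and kubeconfig paths:

    # sketch: block until the default ServiceAccount appears
    until sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
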
	W0625 15:56:18.455428   36162 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0625 15:56:18.455438   36162 kubeadm.go:393] duration metric: took 22.180073428s to StartCluster
	I0625 15:56:18.455457   36162 settings.go:142] acquiring lock: {Name:mk38d7db80b40da56857d65b8e7da05700cdb9d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:56:18.455531   36162 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19128-13846/kubeconfig
	I0625 15:56:18.456169   36162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/kubeconfig: {Name:mk71a37176bd7deadd1f1cd3c756fe56f3b0810d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:56:18.456356   36162 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0625 15:56:18.456387   36162 start.go:240] waiting for startup goroutines ...
	I0625 15:56:18.456363   36162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0625 15:56:18.456394   36162 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0625 15:56:18.456490   36162 addons.go:69] Setting storage-provisioner=true in profile "ha-674765"
	I0625 15:56:18.456515   36162 addons.go:69] Setting default-storageclass=true in profile "ha-674765"
	I0625 15:56:18.456549   36162 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 15:56:18.456568   36162 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-674765"
	I0625 15:56:18.456519   36162 addons.go:234] Setting addon storage-provisioner=true in "ha-674765"
	I0625 15:56:18.456625   36162 host.go:66] Checking if "ha-674765" exists ...
	I0625 15:56:18.456973   36162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:56:18.456985   36162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:56:18.456999   36162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:56:18.457006   36162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:56:18.471583   36162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40779
	I0625 15:56:18.471871   36162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37905
	I0625 15:56:18.472124   36162 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:56:18.472338   36162 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:56:18.472619   36162 main.go:141] libmachine: Using API Version  1
	I0625 15:56:18.472642   36162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:56:18.472791   36162 main.go:141] libmachine: Using API Version  1
	I0625 15:56:18.472810   36162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:56:18.472957   36162 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:56:18.473060   36162 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:56:18.473190   36162 main.go:141] libmachine: (ha-674765) Calling .GetState
	I0625 15:56:18.473505   36162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:56:18.473537   36162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:56:18.475310   36162 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19128-13846/kubeconfig
	I0625 15:56:18.475528   36162 kapi.go:59] client config for ha-674765: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.crt", KeyFile:"/home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.key", CAFile:"/home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfa3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0625 15:56:18.475961   36162 cert_rotation.go:137] Starting client certificate rotation controller
	I0625 15:56:18.476120   36162 addons.go:234] Setting addon default-storageclass=true in "ha-674765"
	I0625 15:56:18.476149   36162 host.go:66] Checking if "ha-674765" exists ...
	I0625 15:56:18.476379   36162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:56:18.476415   36162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:56:18.488078   36162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39921
	I0625 15:56:18.488555   36162 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:56:18.489023   36162 main.go:141] libmachine: Using API Version  1
	I0625 15:56:18.489048   36162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:56:18.489415   36162 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:56:18.489610   36162 main.go:141] libmachine: (ha-674765) Calling .GetState
	I0625 15:56:18.489779   36162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42229
	I0625 15:56:18.490212   36162 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:56:18.490711   36162 main.go:141] libmachine: Using API Version  1
	I0625 15:56:18.490733   36162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:56:18.491094   36162 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:56:18.491359   36162 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 15:56:18.491665   36162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:56:18.491725   36162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:56:18.493423   36162 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0625 15:56:18.494662   36162 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0625 15:56:18.494680   36162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0625 15:56:18.494696   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:56:18.497391   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:56:18.497771   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:56:18.497791   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:56:18.497945   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:56:18.498107   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:56:18.498223   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:56:18.498345   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 15:56:18.505779   36162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43979
	I0625 15:56:18.506135   36162 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:56:18.508727   36162 main.go:141] libmachine: Using API Version  1
	I0625 15:56:18.508755   36162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:56:18.509106   36162 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:56:18.509286   36162 main.go:141] libmachine: (ha-674765) Calling .GetState
	I0625 15:56:18.510835   36162 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 15:56:18.511037   36162 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0625 15:56:18.511051   36162 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0625 15:56:18.511063   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:56:18.513438   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:56:18.513770   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:56:18.513793   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:56:18.514023   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:56:18.514201   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:56:18.514350   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:56:18.514513   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 15:56:18.576559   36162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0625 15:56:18.647420   36162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0625 15:56:18.677361   36162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0625 15:56:18.958492   36162 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0625 15:56:19.177252   36162 main.go:141] libmachine: Making call to close driver server
	I0625 15:56:19.177278   36162 main.go:141] libmachine: (ha-674765) Calling .Close
	I0625 15:56:19.177253   36162 main.go:141] libmachine: Making call to close driver server
	I0625 15:56:19.177344   36162 main.go:141] libmachine: (ha-674765) Calling .Close
	I0625 15:56:19.177546   36162 main.go:141] libmachine: (ha-674765) DBG | Closing plugin on server side
	I0625 15:56:19.177583   36162 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:56:19.177588   36162 main.go:141] libmachine: (ha-674765) DBG | Closing plugin on server side
	I0625 15:56:19.177596   36162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:56:19.177607   36162 main.go:141] libmachine: Making call to close driver server
	I0625 15:56:19.177616   36162 main.go:141] libmachine: (ha-674765) Calling .Close
	I0625 15:56:19.177687   36162 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:56:19.177701   36162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:56:19.177714   36162 main.go:141] libmachine: Making call to close driver server
	I0625 15:56:19.177724   36162 main.go:141] libmachine: (ha-674765) Calling .Close
	I0625 15:56:19.177923   36162 main.go:141] libmachine: (ha-674765) DBG | Closing plugin on server side
	I0625 15:56:19.177933   36162 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:56:19.177945   36162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:56:19.177950   36162 main.go:141] libmachine: (ha-674765) DBG | Closing plugin on server side
	I0625 15:56:19.177981   36162 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:56:19.178003   36162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:56:19.178106   36162 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0625 15:56:19.178119   36162 round_trippers.go:469] Request Headers:
	I0625 15:56:19.178130   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:56:19.178135   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:56:19.188599   36162 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0625 15:56:19.189264   36162 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0625 15:56:19.189282   36162 round_trippers.go:469] Request Headers:
	I0625 15:56:19.189294   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:56:19.189303   36162 round_trippers.go:473]     Content-Type: application/json
	I0625 15:56:19.189307   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:56:19.198099   36162 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0625 15:56:19.198461   36162 main.go:141] libmachine: Making call to close driver server
	I0625 15:56:19.198493   36162 main.go:141] libmachine: (ha-674765) Calling .Close
	I0625 15:56:19.198709   36162 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:56:19.198725   36162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:56:19.200299   36162 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0625 15:56:19.201450   36162 addons.go:510] duration metric: took 745.059817ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0625 15:56:19.201475   36162 start.go:245] waiting for cluster config update ...
	I0625 15:56:19.201485   36162 start.go:254] writing updated cluster config ...
	I0625 15:56:19.202840   36162 out.go:177] 
	I0625 15:56:19.204101   36162 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 15:56:19.204186   36162 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/config.json ...
	I0625 15:56:19.205733   36162 out.go:177] * Starting "ha-674765-m02" control-plane node in "ha-674765" cluster
	I0625 15:56:19.206970   36162 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0625 15:56:19.206988   36162 cache.go:56] Caching tarball of preloaded images
	I0625 15:56:19.207057   36162 preload.go:173] Found /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0625 15:56:19.207068   36162 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0625 15:56:19.207125   36162 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/config.json ...
	I0625 15:56:19.207261   36162 start.go:360] acquireMachinesLock for ha-674765-m02: {Name:mk2a1ebee912b37a2b68bf2f76641f82f8fc2fcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0625 15:56:19.207296   36162 start.go:364] duration metric: took 19.689µs to acquireMachinesLock for "ha-674765-m02"
	I0625 15:56:19.207312   36162 start.go:93] Provisioning new machine with config: &{Name:ha-674765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-674765 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0625 15:56:19.207375   36162 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0625 15:56:19.208756   36162 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0625 15:56:19.208812   36162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:56:19.208833   36162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:56:19.222743   36162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34923
	I0625 15:56:19.223095   36162 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:56:19.223522   36162 main.go:141] libmachine: Using API Version  1
	I0625 15:56:19.223544   36162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:56:19.223907   36162 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:56:19.224089   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetMachineName
	I0625 15:56:19.224247   36162 main.go:141] libmachine: (ha-674765-m02) Calling .DriverName
	I0625 15:56:19.224390   36162 start.go:159] libmachine.API.Create for "ha-674765" (driver="kvm2")
	I0625 15:56:19.224411   36162 client.go:168] LocalClient.Create starting
	I0625 15:56:19.224444   36162 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem
	I0625 15:56:19.224483   36162 main.go:141] libmachine: Decoding PEM data...
	I0625 15:56:19.224510   36162 main.go:141] libmachine: Parsing certificate...
	I0625 15:56:19.224575   36162 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem
	I0625 15:56:19.224602   36162 main.go:141] libmachine: Decoding PEM data...
	I0625 15:56:19.224618   36162 main.go:141] libmachine: Parsing certificate...
	I0625 15:56:19.224643   36162 main.go:141] libmachine: Running pre-create checks...
	I0625 15:56:19.224655   36162 main.go:141] libmachine: (ha-674765-m02) Calling .PreCreateCheck
	I0625 15:56:19.224859   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetConfigRaw
	I0625 15:56:19.225299   36162 main.go:141] libmachine: Creating machine...
	I0625 15:56:19.225327   36162 main.go:141] libmachine: (ha-674765-m02) Calling .Create
	I0625 15:56:19.225446   36162 main.go:141] libmachine: (ha-674765-m02) Creating KVM machine...
	I0625 15:56:19.226578   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found existing default KVM network
	I0625 15:56:19.226766   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found existing private KVM network mk-ha-674765
	I0625 15:56:19.226902   36162 main.go:141] libmachine: (ha-674765-m02) Setting up store path in /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02 ...
	I0625 15:56:19.226926   36162 main.go:141] libmachine: (ha-674765-m02) Building disk image from file:///home/jenkins/minikube-integration/19128-13846/.minikube/cache/iso/amd64/minikube-v1.33.1-1719245461-19128-amd64.iso
	I0625 15:56:19.226976   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:19.226876   36561 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 15:56:19.227083   36162 main.go:141] libmachine: (ha-674765-m02) Downloading /home/jenkins/minikube-integration/19128-13846/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19128-13846/.minikube/cache/iso/amd64/minikube-v1.33.1-1719245461-19128-amd64.iso...
	I0625 15:56:19.447297   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:19.447171   36561 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/id_rsa...
	I0625 15:56:19.975551   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:19.975447   36561 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/ha-674765-m02.rawdisk...
	I0625 15:56:19.975577   36162 main.go:141] libmachine: (ha-674765-m02) DBG | Writing magic tar header
	I0625 15:56:19.975587   36162 main.go:141] libmachine: (ha-674765-m02) DBG | Writing SSH key tar header
	I0625 15:56:19.975594   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:19.975564   36561 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02 ...
	I0625 15:56:19.975697   36162 main.go:141] libmachine: (ha-674765-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02
	I0625 15:56:19.975737   36162 main.go:141] libmachine: (ha-674765-m02) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02 (perms=drwx------)
	I0625 15:56:19.975753   36162 main.go:141] libmachine: (ha-674765-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube/machines
	I0625 15:56:19.975782   36162 main.go:141] libmachine: (ha-674765-m02) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube/machines (perms=drwxr-xr-x)
	I0625 15:56:19.975805   36162 main.go:141] libmachine: (ha-674765-m02) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube (perms=drwxr-xr-x)
	I0625 15:56:19.975817   36162 main.go:141] libmachine: (ha-674765-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 15:56:19.975831   36162 main.go:141] libmachine: (ha-674765-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846
	I0625 15:56:19.975845   36162 main.go:141] libmachine: (ha-674765-m02) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846 (perms=drwxrwxr-x)
	I0625 15:56:19.975857   36162 main.go:141] libmachine: (ha-674765-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0625 15:56:19.975869   36162 main.go:141] libmachine: (ha-674765-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0625 15:56:19.975881   36162 main.go:141] libmachine: (ha-674765-m02) Creating domain...
	I0625 15:56:19.975896   36162 main.go:141] libmachine: (ha-674765-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0625 15:56:19.975908   36162 main.go:141] libmachine: (ha-674765-m02) DBG | Checking permissions on dir: /home/jenkins
	I0625 15:56:19.975921   36162 main.go:141] libmachine: (ha-674765-m02) DBG | Checking permissions on dir: /home
	I0625 15:56:19.975932   36162 main.go:141] libmachine: (ha-674765-m02) DBG | Skipping /home - not owner
	I0625 15:56:19.976781   36162 main.go:141] libmachine: (ha-674765-m02) define libvirt domain using xml: 
	I0625 15:56:19.976802   36162 main.go:141] libmachine: (ha-674765-m02) <domain type='kvm'>
	I0625 15:56:19.976811   36162 main.go:141] libmachine: (ha-674765-m02)   <name>ha-674765-m02</name>
	I0625 15:56:19.976821   36162 main.go:141] libmachine: (ha-674765-m02)   <memory unit='MiB'>2200</memory>
	I0625 15:56:19.976830   36162 main.go:141] libmachine: (ha-674765-m02)   <vcpu>2</vcpu>
	I0625 15:56:19.976841   36162 main.go:141] libmachine: (ha-674765-m02)   <features>
	I0625 15:56:19.976850   36162 main.go:141] libmachine: (ha-674765-m02)     <acpi/>
	I0625 15:56:19.976858   36162 main.go:141] libmachine: (ha-674765-m02)     <apic/>
	I0625 15:56:19.976870   36162 main.go:141] libmachine: (ha-674765-m02)     <pae/>
	I0625 15:56:19.976877   36162 main.go:141] libmachine: (ha-674765-m02)     
	I0625 15:56:19.976888   36162 main.go:141] libmachine: (ha-674765-m02)   </features>
	I0625 15:56:19.976904   36162 main.go:141] libmachine: (ha-674765-m02)   <cpu mode='host-passthrough'>
	I0625 15:56:19.976915   36162 main.go:141] libmachine: (ha-674765-m02)   
	I0625 15:56:19.976926   36162 main.go:141] libmachine: (ha-674765-m02)   </cpu>
	I0625 15:56:19.976938   36162 main.go:141] libmachine: (ha-674765-m02)   <os>
	I0625 15:56:19.976948   36162 main.go:141] libmachine: (ha-674765-m02)     <type>hvm</type>
	I0625 15:56:19.976960   36162 main.go:141] libmachine: (ha-674765-m02)     <boot dev='cdrom'/>
	I0625 15:56:19.976976   36162 main.go:141] libmachine: (ha-674765-m02)     <boot dev='hd'/>
	I0625 15:56:19.976989   36162 main.go:141] libmachine: (ha-674765-m02)     <bootmenu enable='no'/>
	I0625 15:56:19.976999   36162 main.go:141] libmachine: (ha-674765-m02)   </os>
	I0625 15:56:19.977010   36162 main.go:141] libmachine: (ha-674765-m02)   <devices>
	I0625 15:56:19.977022   36162 main.go:141] libmachine: (ha-674765-m02)     <disk type='file' device='cdrom'>
	I0625 15:56:19.977039   36162 main.go:141] libmachine: (ha-674765-m02)       <source file='/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/boot2docker.iso'/>
	I0625 15:56:19.977059   36162 main.go:141] libmachine: (ha-674765-m02)       <target dev='hdc' bus='scsi'/>
	I0625 15:56:19.977071   36162 main.go:141] libmachine: (ha-674765-m02)       <readonly/>
	I0625 15:56:19.977081   36162 main.go:141] libmachine: (ha-674765-m02)     </disk>
	I0625 15:56:19.977095   36162 main.go:141] libmachine: (ha-674765-m02)     <disk type='file' device='disk'>
	I0625 15:56:19.977112   36162 main.go:141] libmachine: (ha-674765-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0625 15:56:19.977138   36162 main.go:141] libmachine: (ha-674765-m02)       <source file='/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/ha-674765-m02.rawdisk'/>
	I0625 15:56:19.977156   36162 main.go:141] libmachine: (ha-674765-m02)       <target dev='hda' bus='virtio'/>
	I0625 15:56:19.977166   36162 main.go:141] libmachine: (ha-674765-m02)     </disk>
	I0625 15:56:19.977191   36162 main.go:141] libmachine: (ha-674765-m02)     <interface type='network'>
	I0625 15:56:19.977204   36162 main.go:141] libmachine: (ha-674765-m02)       <source network='mk-ha-674765'/>
	I0625 15:56:19.977214   36162 main.go:141] libmachine: (ha-674765-m02)       <model type='virtio'/>
	I0625 15:56:19.977225   36162 main.go:141] libmachine: (ha-674765-m02)     </interface>
	I0625 15:56:19.977236   36162 main.go:141] libmachine: (ha-674765-m02)     <interface type='network'>
	I0625 15:56:19.977247   36162 main.go:141] libmachine: (ha-674765-m02)       <source network='default'/>
	I0625 15:56:19.977261   36162 main.go:141] libmachine: (ha-674765-m02)       <model type='virtio'/>
	I0625 15:56:19.977273   36162 main.go:141] libmachine: (ha-674765-m02)     </interface>
	I0625 15:56:19.977284   36162 main.go:141] libmachine: (ha-674765-m02)     <serial type='pty'>
	I0625 15:56:19.977296   36162 main.go:141] libmachine: (ha-674765-m02)       <target port='0'/>
	I0625 15:56:19.977305   36162 main.go:141] libmachine: (ha-674765-m02)     </serial>
	I0625 15:56:19.977321   36162 main.go:141] libmachine: (ha-674765-m02)     <console type='pty'>
	I0625 15:56:19.977343   36162 main.go:141] libmachine: (ha-674765-m02)       <target type='serial' port='0'/>
	I0625 15:56:19.977357   36162 main.go:141] libmachine: (ha-674765-m02)     </console>
	I0625 15:56:19.977371   36162 main.go:141] libmachine: (ha-674765-m02)     <rng model='virtio'>
	I0625 15:56:19.977383   36162 main.go:141] libmachine: (ha-674765-m02)       <backend model='random'>/dev/random</backend>
	I0625 15:56:19.977394   36162 main.go:141] libmachine: (ha-674765-m02)     </rng>
	I0625 15:56:19.977403   36162 main.go:141] libmachine: (ha-674765-m02)     
	I0625 15:56:19.977414   36162 main.go:141] libmachine: (ha-674765-m02)     
	I0625 15:56:19.977423   36162 main.go:141] libmachine: (ha-674765-m02)   </devices>
	I0625 15:56:19.977432   36162 main.go:141] libmachine: (ha-674765-m02) </domain>
	I0625 15:56:19.977447   36162 main.go:141] libmachine: (ha-674765-m02) 
	I0625 15:56:19.984916   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:d6:eb:ee in network default
	I0625 15:56:19.985488   36162 main.go:141] libmachine: (ha-674765-m02) Ensuring networks are active...
	I0625 15:56:19.985506   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:19.986270   36162 main.go:141] libmachine: (ha-674765-m02) Ensuring network default is active
	I0625 15:56:19.986621   36162 main.go:141] libmachine: (ha-674765-m02) Ensuring network mk-ha-674765 is active
	I0625 15:56:19.987198   36162 main.go:141] libmachine: (ha-674765-m02) Getting domain xml...
	I0625 15:56:19.987966   36162 main.go:141] libmachine: (ha-674765-m02) Creating domain...
	I0625 15:56:21.179303   36162 main.go:141] libmachine: (ha-674765-m02) Waiting to get IP...
	I0625 15:56:21.180185   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:21.180587   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find current IP address of domain ha-674765-m02 in network mk-ha-674765
	I0625 15:56:21.180639   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:21.180580   36561 retry.go:31] will retry after 282.650658ms: waiting for machine to come up
	I0625 15:56:21.465057   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:21.465535   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find current IP address of domain ha-674765-m02 in network mk-ha-674765
	I0625 15:56:21.465566   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:21.465511   36561 retry.go:31] will retry after 336.945771ms: waiting for machine to come up
	I0625 15:56:21.803843   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:21.804361   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find current IP address of domain ha-674765-m02 in network mk-ha-674765
	I0625 15:56:21.804394   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:21.804310   36561 retry.go:31] will retry after 387.860578ms: waiting for machine to come up
	I0625 15:56:22.193809   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:22.194306   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find current IP address of domain ha-674765-m02 in network mk-ha-674765
	I0625 15:56:22.194337   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:22.194269   36561 retry.go:31] will retry after 505.4586ms: waiting for machine to come up
	I0625 15:56:22.701076   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:22.701551   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find current IP address of domain ha-674765-m02 in network mk-ha-674765
	I0625 15:56:22.701579   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:22.701503   36561 retry.go:31] will retry after 747.446006ms: waiting for machine to come up
	I0625 15:56:23.449951   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:23.450415   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find current IP address of domain ha-674765-m02 in network mk-ha-674765
	I0625 15:56:23.450441   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:23.450342   36561 retry.go:31] will retry after 613.447951ms: waiting for machine to come up
	I0625 15:56:24.064836   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:24.065296   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find current IP address of domain ha-674765-m02 in network mk-ha-674765
	I0625 15:56:24.065313   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:24.065262   36561 retry.go:31] will retry after 903.605792ms: waiting for machine to come up
	I0625 15:56:24.971237   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:24.971676   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find current IP address of domain ha-674765-m02 in network mk-ha-674765
	I0625 15:56:24.971701   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:24.971635   36561 retry.go:31] will retry after 1.047838265s: waiting for machine to come up
	I0625 15:56:26.020788   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:26.021179   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find current IP address of domain ha-674765-m02 in network mk-ha-674765
	I0625 15:56:26.021206   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:26.021135   36561 retry.go:31] will retry after 1.430529445s: waiting for machine to come up
	I0625 15:56:27.453560   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:27.453922   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find current IP address of domain ha-674765-m02 in network mk-ha-674765
	I0625 15:56:27.453946   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:27.453874   36561 retry.go:31] will retry after 2.175772528s: waiting for machine to come up
	I0625 15:56:29.631331   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:29.631893   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find current IP address of domain ha-674765-m02 in network mk-ha-674765
	I0625 15:56:29.631918   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:29.631847   36561 retry.go:31] will retry after 1.836171852s: waiting for machine to come up
	I0625 15:56:31.469626   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:31.470037   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find current IP address of domain ha-674765-m02 in network mk-ha-674765
	I0625 15:56:31.470086   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:31.470020   36561 retry.go:31] will retry after 2.361454491s: waiting for machine to come up
	I0625 15:56:33.834350   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:33.834856   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find current IP address of domain ha-674765-m02 in network mk-ha-674765
	I0625 15:56:33.834879   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:33.834813   36561 retry.go:31] will retry after 4.478470724s: waiting for machine to come up
	I0625 15:56:38.316527   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:38.316937   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find current IP address of domain ha-674765-m02 in network mk-ha-674765
	I0625 15:56:38.316963   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:38.316900   36561 retry.go:31] will retry after 5.11600979s: waiting for machine to come up
	I0625 15:56:43.435616   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:43.436057   36162 main.go:141] libmachine: (ha-674765-m02) Found IP for machine: 192.168.39.53
	I0625 15:56:43.436083   36162 main.go:141] libmachine: (ha-674765-m02) Reserving static IP address...
	I0625 15:56:43.436092   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has current primary IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:43.436463   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find host DHCP lease matching {name: "ha-674765-m02", mac: "52:54:00:10:f4:2d", ip: "192.168.39.53"} in network mk-ha-674765
	I0625 15:56:43.506554   36162 main.go:141] libmachine: (ha-674765-m02) DBG | Getting to WaitForSSH function...
	I0625 15:56:43.506583   36162 main.go:141] libmachine: (ha-674765-m02) Reserved static IP address: 192.168.39.53
	I0625 15:56:43.506596   36162 main.go:141] libmachine: (ha-674765-m02) Waiting for SSH to be available...
	I0625 15:56:43.509263   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:43.509624   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:minikube Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:43.509649   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:43.509853   36162 main.go:141] libmachine: (ha-674765-m02) DBG | Using SSH client type: external
	I0625 15:56:43.509877   36162 main.go:141] libmachine: (ha-674765-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/id_rsa (-rw-------)
	I0625 15:56:43.509906   36162 main.go:141] libmachine: (ha-674765-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0625 15:56:43.509917   36162 main.go:141] libmachine: (ha-674765-m02) DBG | About to run SSH command:
	I0625 15:56:43.509974   36162 main.go:141] libmachine: (ha-674765-m02) DBG | exit 0
	I0625 15:56:43.638837   36162 main.go:141] libmachine: (ha-674765-m02) DBG | SSH cmd err, output: <nil>: 
	I0625 15:56:43.639138   36162 main.go:141] libmachine: (ha-674765-m02) KVM machine creation complete!
	I0625 15:56:43.639371   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetConfigRaw
	I0625 15:56:43.639968   36162 main.go:141] libmachine: (ha-674765-m02) Calling .DriverName
	I0625 15:56:43.640166   36162 main.go:141] libmachine: (ha-674765-m02) Calling .DriverName
	I0625 15:56:43.640311   36162 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0625 15:56:43.640328   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetState
	I0625 15:56:43.641693   36162 main.go:141] libmachine: Detecting operating system of created instance...
	I0625 15:56:43.641709   36162 main.go:141] libmachine: Waiting for SSH to be available...
	I0625 15:56:43.641716   36162 main.go:141] libmachine: Getting to WaitForSSH function...
	I0625 15:56:43.641724   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 15:56:43.644119   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:43.644499   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:43.644516   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:43.644712   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 15:56:43.644908   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:43.645089   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:43.645204   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 15:56:43.645340   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:56:43.645606   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0625 15:56:43.645625   36162 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0625 15:56:43.757512   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0625 15:56:43.757541   36162 main.go:141] libmachine: Detecting the provisioner...
	I0625 15:56:43.757551   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 15:56:43.760543   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:43.760942   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:43.760965   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:43.761120   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 15:56:43.761298   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:43.761432   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:43.761540   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 15:56:43.761659   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:56:43.761861   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0625 15:56:43.761874   36162 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0625 15:56:43.879100   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0625 15:56:43.879182   36162 main.go:141] libmachine: found compatible host: buildroot
	I0625 15:56:43.879191   36162 main.go:141] libmachine: Provisioning with buildroot...
	I0625 15:56:43.879198   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetMachineName
	I0625 15:56:43.879420   36162 buildroot.go:166] provisioning hostname "ha-674765-m02"
	I0625 15:56:43.879450   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetMachineName
	I0625 15:56:43.879603   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 15:56:43.882190   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:43.882586   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:43.882613   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:43.882793   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 15:56:43.882966   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:43.883121   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:43.883220   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 15:56:43.883387   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:56:43.883588   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0625 15:56:43.883606   36162 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-674765-m02 && echo "ha-674765-m02" | sudo tee /etc/hostname
	I0625 15:56:44.008949   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-674765-m02
	
	I0625 15:56:44.008972   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 15:56:44.011836   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.012247   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:44.012278   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.012472   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 15:56:44.012645   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:44.012804   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:44.012914   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 15:56:44.013049   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:56:44.013219   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0625 15:56:44.013242   36162 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-674765-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-674765-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-674765-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0625 15:56:44.131935   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0625 15:56:44.131962   36162 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19128-13846/.minikube CaCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19128-13846/.minikube}
	I0625 15:56:44.131976   36162 buildroot.go:174] setting up certificates
	I0625 15:56:44.131985   36162 provision.go:84] configureAuth start
	I0625 15:56:44.131996   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetMachineName
	I0625 15:56:44.132256   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetIP
	I0625 15:56:44.135231   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.135590   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:44.135634   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.135776   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 15:56:44.138252   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.138732   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:44.138757   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.138892   36162 provision.go:143] copyHostCerts
	I0625 15:56:44.138922   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem
	I0625 15:56:44.138950   36162 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem, removing ...
	I0625 15:56:44.138959   36162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem
	I0625 15:56:44.139024   36162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem (1078 bytes)
	I0625 15:56:44.139107   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem
	I0625 15:56:44.139141   36162 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem, removing ...
	I0625 15:56:44.139151   36162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem
	I0625 15:56:44.139194   36162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem (1123 bytes)
	I0625 15:56:44.139270   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem
	I0625 15:56:44.139295   36162 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem, removing ...
	I0625 15:56:44.139299   36162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem
	I0625 15:56:44.139328   36162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem (1679 bytes)
	I0625 15:56:44.139382   36162 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem org=jenkins.ha-674765-m02 san=[127.0.0.1 192.168.39.53 ha-674765-m02 localhost minikube]
	I0625 15:56:44.264356   36162 provision.go:177] copyRemoteCerts
	I0625 15:56:44.264406   36162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0625 15:56:44.264426   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 15:56:44.267152   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.267510   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:44.267531   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.267689   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 15:56:44.267905   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:44.268074   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 15:56:44.268226   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/id_rsa Username:docker}
	I0625 15:56:44.356736   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0625 15:56:44.356805   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0625 15:56:44.383296   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0625 15:56:44.383365   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0625 15:56:44.408362   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0625 15:56:44.408436   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0625 15:56:44.436178   36162 provision.go:87] duration metric: took 304.180992ms to configureAuth
	I0625 15:56:44.436205   36162 buildroot.go:189] setting minikube options for container-runtime
	I0625 15:56:44.436414   36162 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 15:56:44.436506   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 15:56:44.439256   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.439568   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:44.439588   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.439775   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 15:56:44.439952   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:44.440094   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:44.440218   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 15:56:44.440327   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:56:44.440477   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0625 15:56:44.440491   36162 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0625 15:56:44.705173   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0625 15:56:44.705203   36162 main.go:141] libmachine: Checking connection to Docker...
	I0625 15:56:44.705214   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetURL
	I0625 15:56:44.706585   36162 main.go:141] libmachine: (ha-674765-m02) DBG | Using libvirt version 6000000
	I0625 15:56:44.709060   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.709569   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:44.709596   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.709795   36162 main.go:141] libmachine: Docker is up and running!
	I0625 15:56:44.709819   36162 main.go:141] libmachine: Reticulating splines...
	I0625 15:56:44.709828   36162 client.go:171] duration metric: took 25.485406116s to LocalClient.Create
	I0625 15:56:44.709853   36162 start.go:167] duration metric: took 25.485464391s to libmachine.API.Create "ha-674765"
	I0625 15:56:44.709865   36162 start.go:293] postStartSetup for "ha-674765-m02" (driver="kvm2")
	I0625 15:56:44.709879   36162 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0625 15:56:44.709902   36162 main.go:141] libmachine: (ha-674765-m02) Calling .DriverName
	I0625 15:56:44.710129   36162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0625 15:56:44.710156   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 15:56:44.712436   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.712772   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:44.712797   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.712982   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 15:56:44.713161   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:44.713312   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 15:56:44.713458   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/id_rsa Username:docker}
	I0625 15:56:44.801536   36162 ssh_runner.go:195] Run: cat /etc/os-release
	I0625 15:56:44.805686   36162 info.go:137] Remote host: Buildroot 2023.02.9
	I0625 15:56:44.805710   36162 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/addons for local assets ...
	I0625 15:56:44.805779   36162 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/files for local assets ...
	I0625 15:56:44.805859   36162 filesync.go:149] local asset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> 212392.pem in /etc/ssl/certs
	I0625 15:56:44.805869   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> /etc/ssl/certs/212392.pem
	I0625 15:56:44.805944   36162 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0625 15:56:44.815391   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem --> /etc/ssl/certs/212392.pem (1708 bytes)
	I0625 15:56:44.838164   36162 start.go:296] duration metric: took 128.283548ms for postStartSetup
	I0625 15:56:44.838208   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetConfigRaw
	I0625 15:56:44.838767   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetIP
	I0625 15:56:44.841210   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.841590   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:44.841617   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.841846   36162 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/config.json ...
	I0625 15:56:44.842066   36162 start.go:128] duration metric: took 25.634681289s to createHost
	I0625 15:56:44.842088   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 15:56:44.844130   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.844486   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:44.844513   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.844694   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 15:56:44.844859   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:44.845009   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:44.845126   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 15:56:44.845307   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:56:44.845471   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0625 15:56:44.845488   36162 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0625 15:56:44.959485   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719331004.936443927
	
	I0625 15:56:44.959508   36162 fix.go:216] guest clock: 1719331004.936443927
	I0625 15:56:44.959518   36162 fix.go:229] Guest: 2024-06-25 15:56:44.936443927 +0000 UTC Remote: 2024-06-25 15:56:44.842078261 +0000 UTC m=+80.209901183 (delta=94.365666ms)
	I0625 15:56:44.959542   36162 fix.go:200] guest clock delta is within tolerance: 94.365666ms
	I0625 15:56:44.959549   36162 start.go:83] releasing machines lock for "ha-674765-m02", held for 25.752244408s
	I0625 15:56:44.959580   36162 main.go:141] libmachine: (ha-674765-m02) Calling .DriverName
	I0625 15:56:44.959844   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetIP
	I0625 15:56:44.962408   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.962838   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:44.962870   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.965323   36162 out.go:177] * Found network options:
	I0625 15:56:44.966887   36162 out.go:177]   - NO_PROXY=192.168.39.128
	W0625 15:56:44.968395   36162 proxy.go:119] fail to check proxy env: Error ip not in block
	I0625 15:56:44.968435   36162 main.go:141] libmachine: (ha-674765-m02) Calling .DriverName
	I0625 15:56:44.968940   36162 main.go:141] libmachine: (ha-674765-m02) Calling .DriverName
	I0625 15:56:44.969145   36162 main.go:141] libmachine: (ha-674765-m02) Calling .DriverName
	I0625 15:56:44.969199   36162 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0625 15:56:44.969240   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	W0625 15:56:44.969308   36162 proxy.go:119] fail to check proxy env: Error ip not in block
	I0625 15:56:44.969384   36162 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0625 15:56:44.969403   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 15:56:44.972259   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.972467   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.972653   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:44.972678   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.972795   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 15:56:44.972933   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:44.972962   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.972979   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:44.973098   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 15:56:44.973145   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 15:56:44.973228   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:44.973319   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/id_rsa Username:docker}
	I0625 15:56:44.973469   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 15:56:44.973610   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/id_rsa Username:docker}
	I0625 15:56:45.205451   36162 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0625 15:56:45.211847   36162 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0625 15:56:45.211914   36162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0625 15:56:45.229533   36162 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0625 15:56:45.229564   36162 start.go:494] detecting cgroup driver to use...
	I0625 15:56:45.229628   36162 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0625 15:56:45.247009   36162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0625 15:56:45.260421   36162 docker.go:217] disabling cri-docker service (if available) ...
	I0625 15:56:45.260480   36162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0625 15:56:45.273876   36162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0625 15:56:45.286958   36162 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0625 15:56:45.403810   36162 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0625 15:56:45.550586   36162 docker.go:233] disabling docker service ...
	I0625 15:56:45.550655   36162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0625 15:56:45.564489   36162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0625 15:56:45.576838   36162 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0625 15:56:45.708091   36162 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0625 15:56:45.846107   36162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0625 15:56:45.860205   36162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0625 15:56:45.879876   36162 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0625 15:56:45.879925   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:56:45.891391   36162 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0625 15:56:45.891465   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:56:45.902882   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:56:45.914347   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:56:45.926912   36162 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0625 15:56:45.939261   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:56:45.951330   36162 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:56:45.970241   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:56:45.982394   36162 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0625 15:56:45.993515   36162 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0625 15:56:45.993554   36162 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0625 15:56:46.009455   36162 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0625 15:56:46.021074   36162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 15:56:46.147216   36162 ssh_runner.go:195] Run: sudo systemctl restart crio
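In shell terms, the CRI-O preparation recorded above reduces to a handful of commands on the guest; a minimal sketch, using the same drop-in file and values the log shows being edited:

    # point CRI-O at the pause image and the cgroupfs driver, then restart it
    # (file path and values taken from the ssh_runner commands above)
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload
    sudo systemctl restart crio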
	I0625 15:56:46.283042   36162 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0625 15:56:46.283099   36162 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0625 15:56:46.288405   36162 start.go:562] Will wait 60s for crictl version
	I0625 15:56:46.288459   36162 ssh_runner.go:195] Run: which crictl
	I0625 15:56:46.292293   36162 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0625 15:56:46.339974   36162 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0625 15:56:46.340069   36162 ssh_runner.go:195] Run: crio --version
	I0625 15:56:46.374253   36162 ssh_runner.go:195] Run: crio --version
	I0625 15:56:46.403924   36162 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0625 15:56:46.405251   36162 out.go:177]   - env NO_PROXY=192.168.39.128
	I0625 15:56:46.406413   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetIP
	I0625 15:56:46.409391   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:46.409787   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:46.409814   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:46.410095   36162 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0625 15:56:46.415414   36162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0625 15:56:46.428410   36162 mustload.go:65] Loading cluster: ha-674765
	I0625 15:56:46.428590   36162 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 15:56:46.428858   36162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:56:46.428886   36162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:56:46.443673   36162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36117
	I0625 15:56:46.444052   36162 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:56:46.444465   36162 main.go:141] libmachine: Using API Version  1
	I0625 15:56:46.444480   36162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:56:46.444814   36162 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:56:46.444987   36162 main.go:141] libmachine: (ha-674765) Calling .GetState
	I0625 15:56:46.446627   36162 host.go:66] Checking if "ha-674765" exists ...
	I0625 15:56:46.446893   36162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:56:46.446914   36162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:56:46.460420   36162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44941
	I0625 15:56:46.460784   36162 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:56:46.461162   36162 main.go:141] libmachine: Using API Version  1
	I0625 15:56:46.461184   36162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:56:46.461438   36162 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:56:46.461643   36162 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 15:56:46.461809   36162 certs.go:68] Setting up /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765 for IP: 192.168.39.53
	I0625 15:56:46.461821   36162 certs.go:194] generating shared ca certs ...
	I0625 15:56:46.461841   36162 certs.go:226] acquiring lock for ca certs: {Name:mkac904b769881cd26c50f043dc80ff92937f71f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:56:46.461965   36162 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key
	I0625 15:56:46.462017   36162 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key
	I0625 15:56:46.462042   36162 certs.go:256] generating profile certs ...
	I0625 15:56:46.462130   36162 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.key
	I0625 15:56:46.462158   36162 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.3cf33f8e
	I0625 15:56:46.462178   36162 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.3cf33f8e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.128 192.168.39.53 192.168.39.254]
	I0625 15:56:46.776861   36162 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.3cf33f8e ...
	I0625 15:56:46.776891   36162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.3cf33f8e: {Name:mk63bfac5d652837104707bb3a98a9a6114ad62b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:56:46.777070   36162 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.3cf33f8e ...
	I0625 15:56:46.777089   36162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.3cf33f8e: {Name:mk0954e4ee17ed2229bef891eb165210e12ccf5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:56:46.777190   36162 certs.go:381] copying /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.3cf33f8e -> /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt
	I0625 15:56:46.777337   36162 certs.go:385] copying /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.3cf33f8e -> /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key
	I0625 15:56:46.777499   36162 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.key
	I0625 15:56:46.777516   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0625 15:56:46.777533   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0625 15:56:46.777550   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0625 15:56:46.777570   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0625 15:56:46.777589   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0625 15:56:46.777607   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0625 15:56:46.777625   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0625 15:56:46.777643   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0625 15:56:46.777701   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem (1338 bytes)
	W0625 15:56:46.777738   36162 certs.go:480] ignoring /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239_empty.pem, impossibly tiny 0 bytes
	I0625 15:56:46.777751   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem (1679 bytes)
	I0625 15:56:46.777789   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem (1078 bytes)
	I0625 15:56:46.777820   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem (1123 bytes)
	I0625 15:56:46.777852   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem (1679 bytes)
	I0625 15:56:46.777908   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem (1708 bytes)
	I0625 15:56:46.777945   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> /usr/share/ca-certificates/212392.pem
	I0625 15:56:46.777965   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0625 15:56:46.777983   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem -> /usr/share/ca-certificates/21239.pem
	I0625 15:56:46.778020   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:56:46.780624   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:56:46.780925   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:56:46.780948   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:56:46.781144   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:56:46.781339   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:56:46.781501   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:56:46.781649   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 15:56:46.858845   36162 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0625 15:56:46.864476   36162 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0625 15:56:46.875910   36162 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0625 15:56:46.880109   36162 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0625 15:56:46.890533   36162 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0625 15:56:46.894910   36162 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0625 15:56:46.905170   36162 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0625 15:56:46.909338   36162 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0625 15:56:46.920068   36162 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0625 15:56:46.924246   36162 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0625 15:56:46.934395   36162 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0625 15:56:46.938224   36162 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0625 15:56:46.948370   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0625 15:56:46.976834   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0625 15:56:47.009142   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0625 15:56:47.033961   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0625 15:56:47.058231   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0625 15:56:47.082360   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0625 15:56:47.106992   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0625 15:56:47.130587   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0625 15:56:47.153854   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem --> /usr/share/ca-certificates/212392.pem (1708 bytes)
	I0625 15:56:47.176770   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0625 15:56:47.199826   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem --> /usr/share/ca-certificates/21239.pem (1338 bytes)
	I0625 15:56:47.223519   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0625 15:56:47.240420   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0625 15:56:47.257079   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0625 15:56:47.273371   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0625 15:56:47.289547   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0625 15:56:47.305756   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0625 15:56:47.322911   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0625 15:56:47.339666   36162 ssh_runner.go:195] Run: openssl version
	I0625 15:56:47.345606   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/212392.pem && ln -fs /usr/share/ca-certificates/212392.pem /etc/ssl/certs/212392.pem"
	I0625 15:56:47.357087   36162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/212392.pem
	I0625 15:56:47.362012   36162 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 25 15:51 /usr/share/ca-certificates/212392.pem
	I0625 15:56:47.362083   36162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/212392.pem
	I0625 15:56:47.368592   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/212392.pem /etc/ssl/certs/3ec20f2e.0"
	I0625 15:56:47.379518   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0625 15:56:47.390127   36162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0625 15:56:47.394519   36162 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 25 15:10 /usr/share/ca-certificates/minikubeCA.pem
	I0625 15:56:47.394563   36162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0625 15:56:47.400180   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0625 15:56:47.410872   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21239.pem && ln -fs /usr/share/ca-certificates/21239.pem /etc/ssl/certs/21239.pem"
	I0625 15:56:47.421558   36162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21239.pem
	I0625 15:56:47.425788   36162 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 25 15:51 /usr/share/ca-certificates/21239.pem
	I0625 15:56:47.425837   36162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21239.pem
	I0625 15:56:47.431468   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21239.pem /etc/ssl/certs/51391683.0"
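The certificate wiring above follows the usual OpenSSL hash-link convention: each CA file under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash. A rough by-hand equivalent, using the cert path from this run as the example:

    CERT=/usr/share/ca-certificates/212392.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. 3ec20f2e for this cert in the log above
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"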
	I0625 15:56:47.441799   36162 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0625 15:56:47.445765   36162 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0625 15:56:47.445832   36162 kubeadm.go:928] updating node {m02 192.168.39.53 8443 v1.30.2 crio true true} ...
	I0625 15:56:47.445939   36162 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-674765-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-674765 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0625 15:56:47.445976   36162 kube-vip.go:115] generating kube-vip config ...
	I0625 15:56:47.446018   36162 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0625 15:56:47.463886   36162 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0625 15:56:47.463955   36162 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0625 15:56:47.464005   36162 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0625 15:56:47.473844   36162 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0625 15:56:47.473931   36162 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0625 15:56:47.483476   36162 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0625 15:56:47.483503   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0625 15:56:47.483567   36162 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.30.2/kubelet
	I0625 15:56:47.483596   36162 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.30.2/kubeadm
	I0625 15:56:47.483574   36162 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0625 15:56:47.488437   36162 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0625 15:56:47.488473   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0625 15:56:48.371648   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0625 15:56:48.371718   36162 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0625 15:56:48.377117   36162 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0625 15:56:48.377149   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0625 15:56:49.145989   36162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 15:56:49.161457   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0625 15:56:49.161542   36162 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0625 15:56:49.165852   36162 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0625 15:56:49.165886   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
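Because the guest has no cached Kubernetes binaries, each one is downloaded from dl.k8s.io together with a .sha256 checksum file. A rough manual equivalent, assuming linux/amd64 and v1.30.2 as in the log (kubelet shown; kubeadm and kubectl follow the same pattern):

    curl -LO "https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet"
    curl -LO "https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256"
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check   # expected output: kubelet: OK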
	I0625 15:56:49.573619   36162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0625 15:56:49.583407   36162 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0625 15:56:49.601195   36162 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0625 15:56:49.618903   36162 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0625 15:56:49.637792   36162 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0625 15:56:49.641936   36162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0625 15:56:49.656165   36162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 15:56:49.785739   36162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0625 15:56:49.803909   36162 host.go:66] Checking if "ha-674765" exists ...
	I0625 15:56:49.804349   36162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:56:49.804398   36162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:56:49.818976   36162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36479
	I0625 15:56:49.819444   36162 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:56:49.819947   36162 main.go:141] libmachine: Using API Version  1
	I0625 15:56:49.819971   36162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:56:49.820338   36162 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:56:49.820532   36162 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 15:56:49.820733   36162 start.go:316] joinCluster: &{Name:ha-674765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-674765 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0625 15:56:49.820832   36162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0625 15:56:49.820846   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:56:49.823597   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:56:49.823989   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:56:49.824021   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:56:49.824195   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:56:49.824369   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:56:49.824528   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:56:49.824653   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 15:56:49.988030   36162 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0625 15:56:49.988087   36162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rsvyh1.iisaun8ql3zel5y7 --discovery-token-ca-cert-hash sha256:df4523a4334c80aff4a7c2fc7b4a73691744a675a28cdb3d4468287f693ab03d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-674765-m02 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443"
	I0625 15:57:11.986265   36162 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rsvyh1.iisaun8ql3zel5y7 --discovery-token-ca-cert-hash sha256:df4523a4334c80aff4a7c2fc7b4a73691744a675a28cdb3d4468287f693ab03d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-674765-m02 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443": (21.998151766s)
	I0625 15:57:11.986295   36162 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0625 15:57:12.562932   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-674765-m02 minikube.k8s.io/updated_at=2024_06_25T15_57_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b minikube.k8s.io/name=ha-674765 minikube.k8s.io/primary=false
	I0625 15:57:12.672103   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-674765-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0625 15:57:12.767521   36162 start.go:318] duration metric: took 22.946781224s to joinCluster
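Stripped of the wrapper, the join recorded above is a plain kubeadm invocation for an additional control-plane node; its shape, with the token and CA hash replaced by placeholders, is roughly:

    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane \
      --apiserver-advertise-address=192.168.39.53 \
      --apiserver-bind-port=8443 \
      --cri-socket unix:///var/run/crio/crio.sock \
      --node-name=ha-674765-m02 \
      --ignore-preflight-errors=all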
	I0625 15:57:12.767613   36162 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0625 15:57:12.767916   36162 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 15:57:12.768897   36162 out.go:177] * Verifying Kubernetes components...
	I0625 15:57:12.770051   36162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 15:57:13.004125   36162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0625 15:57:13.032881   36162 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19128-13846/kubeconfig
	I0625 15:57:13.033081   36162 kapi.go:59] client config for ha-674765: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.crt", KeyFile:"/home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.key", CAFile:"/home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfa3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0625 15:57:13.033137   36162 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.128:8443
	I0625 15:57:13.033307   36162 node_ready.go:35] waiting up to 6m0s for node "ha-674765-m02" to be "Ready" ...
	I0625 15:57:13.033373   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:13.033381   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:13.033388   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:13.033392   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:13.043431   36162 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0625 15:57:13.534410   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:13.534428   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:13.534438   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:13.534441   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:13.538182   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:14.034306   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:14.034326   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:14.034338   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:14.034345   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:14.039446   36162 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0625 15:57:14.533963   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:14.533985   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:14.533992   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:14.533997   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:14.537144   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:15.034450   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:15.034483   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:15.034491   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:15.034494   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:15.037652   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:15.038110   36162 node_ready.go:53] node "ha-674765-m02" has status "Ready":"False"
	I0625 15:57:15.534176   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:15.534194   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:15.534202   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:15.534206   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:15.537432   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:16.034503   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:16.034523   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:16.034531   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:16.034535   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:16.040112   36162 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0625 15:57:16.534069   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:16.534090   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:16.534098   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:16.534102   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:16.537497   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:17.034500   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:17.034522   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:17.034531   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:17.034536   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:17.037757   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:17.038665   36162 node_ready.go:53] node "ha-674765-m02" has status "Ready":"False"
	I0625 15:57:17.533937   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:17.533966   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:17.533978   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:17.533990   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:17.536681   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:18.033555   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:18.033576   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:18.033584   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:18.033588   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:18.037070   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:18.534407   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:18.534427   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:18.534435   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:18.534439   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:18.537330   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:19.033518   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:19.033540   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:19.033550   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:19.033556   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:19.036885   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:19.534060   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:19.534083   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:19.534091   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:19.534094   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:19.537345   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:19.537969   36162 node_ready.go:53] node "ha-674765-m02" has status "Ready":"False"
	I0625 15:57:20.034304   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:20.034323   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:20.034333   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:20.034339   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:20.037226   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:20.534256   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:20.534274   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:20.534282   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:20.534286   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:20.537337   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:20.537996   36162 node_ready.go:49] node "ha-674765-m02" has status "Ready":"True"
	I0625 15:57:20.538014   36162 node_ready.go:38] duration metric: took 7.50469233s for node "ha-674765-m02" to be "Ready" ...
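The readiness wait above simply polls GET /api/v1/nodes/ha-674765-m02 about every half second (per the timestamps) until the Ready condition reports True. The same check can be reproduced by hand, assuming kubectl is pointed at this cluster's kubeconfig:

    kubectl wait --for=condition=Ready node/ha-674765-m02 --timeout=360s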
	I0625 15:57:20.538024   36162 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0625 15:57:20.538088   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0625 15:57:20.538099   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:20.538109   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:20.538116   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:20.542271   36162 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0625 15:57:20.548231   36162 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-28db5" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:20.548316   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-28db5
	I0625 15:57:20.548326   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:20.548336   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:20.548343   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:20.550570   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:20.551195   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:57:20.551209   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:20.551216   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:20.551221   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:20.553381   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:20.554110   36162 pod_ready.go:92] pod "coredns-7db6d8ff4d-28db5" in "kube-system" namespace has status "Ready":"True"
	I0625 15:57:20.554130   36162 pod_ready.go:81] duration metric: took 5.877818ms for pod "coredns-7db6d8ff4d-28db5" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:20.554142   36162 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-84zkt" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:20.554198   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-84zkt
	I0625 15:57:20.554209   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:20.554219   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:20.554226   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:20.556348   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:20.557071   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:57:20.557084   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:20.557091   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:20.557096   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:20.559058   36162 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0625 15:57:20.559525   36162 pod_ready.go:92] pod "coredns-7db6d8ff4d-84zkt" in "kube-system" namespace has status "Ready":"True"
	I0625 15:57:20.559538   36162 pod_ready.go:81] duration metric: took 5.389642ms for pod "coredns-7db6d8ff4d-84zkt" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:20.559546   36162 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:20.559581   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765
	I0625 15:57:20.559589   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:20.559595   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:20.559599   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:20.561747   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:20.562190   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:57:20.562201   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:20.562207   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:20.562211   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:20.564120   36162 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0625 15:57:20.564704   36162 pod_ready.go:92] pod "etcd-ha-674765" in "kube-system" namespace has status "Ready":"True"
	I0625 15:57:20.564720   36162 pod_ready.go:81] duration metric: took 5.168595ms for pod "etcd-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:20.564729   36162 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:20.564781   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m02
	I0625 15:57:20.564791   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:20.564801   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:20.564808   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:20.567173   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:20.567735   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:20.567747   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:20.567762   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:20.567769   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:20.570009   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:21.064954   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m02
	I0625 15:57:21.064981   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:21.064992   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:21.064998   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:21.068724   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:21.069264   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:21.069279   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:21.069286   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:21.069292   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:21.071145   36162 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0625 15:57:21.565723   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m02
	I0625 15:57:21.565741   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:21.565749   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:21.565753   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:21.568580   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:21.569194   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:21.569209   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:21.569217   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:21.569222   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:21.571774   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:22.065633   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m02
	I0625 15:57:22.065654   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:22.065662   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:22.065666   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:22.068975   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:22.069634   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:22.069650   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:22.069659   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:22.069665   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:22.072405   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:22.565625   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m02
	I0625 15:57:22.565647   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:22.565657   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:22.565662   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:22.568873   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:22.569409   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:22.569422   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:22.569431   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:22.569436   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:22.571772   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:22.572258   36162 pod_ready.go:102] pod "etcd-ha-674765-m02" in "kube-system" namespace has status "Ready":"False"
	I0625 15:57:23.065702   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m02
	I0625 15:57:23.065723   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:23.065731   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:23.065735   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:23.068905   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:23.069772   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:23.069789   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:23.069797   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:23.069802   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:23.072443   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:23.565587   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m02
	I0625 15:57:23.565606   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:23.565614   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:23.565619   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:23.568586   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:23.569632   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:23.569653   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:23.569663   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:23.569668   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:23.573538   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:24.064876   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m02
	I0625 15:57:24.064897   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:24.064905   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:24.064911   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:24.068269   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:24.069052   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:24.069065   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:24.069072   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:24.069076   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:24.071471   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:24.564911   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m02
	I0625 15:57:24.564935   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:24.564947   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:24.564953   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:24.568341   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:24.568952   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:24.568966   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:24.568974   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:24.568979   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:24.571934   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:24.572331   36162 pod_ready.go:102] pod "etcd-ha-674765-m02" in "kube-system" namespace has status "Ready":"False"
	I0625 15:57:25.065911   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m02
	I0625 15:57:25.065931   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:25.065939   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:25.065943   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:25.068874   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:25.069432   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:25.069447   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:25.069454   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:25.069458   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:25.072150   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:25.565017   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m02
	I0625 15:57:25.565035   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:25.565043   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:25.565046   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:25.568134   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:25.568746   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:25.568760   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:25.568767   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:25.568772   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:25.571138   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:26.064981   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m02
	I0625 15:57:26.065002   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:26.065012   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:26.065018   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:26.068072   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:26.068948   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:26.068964   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:26.068971   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:26.068974   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:26.071400   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:26.564852   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m02
	I0625 15:57:26.564873   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:26.564881   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:26.564886   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:26.568031   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:26.568891   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:26.568910   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:26.568917   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:26.568922   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:26.571362   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:26.571905   36162 pod_ready.go:92] pod "etcd-ha-674765-m02" in "kube-system" namespace has status "Ready":"True"
	I0625 15:57:26.571922   36162 pod_ready.go:81] duration metric: took 6.007184595s for pod "etcd-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:26.571940   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:26.571993   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-674765
	I0625 15:57:26.572003   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:26.572012   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:26.572021   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:26.574441   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:26.575212   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:57:26.575227   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:26.575233   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:26.575238   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:26.577293   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:26.577866   36162 pod_ready.go:92] pod "kube-apiserver-ha-674765" in "kube-system" namespace has status "Ready":"True"
	I0625 15:57:26.577884   36162 pod_ready.go:81] duration metric: took 5.936767ms for pod "kube-apiserver-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:26.577895   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:26.577956   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-674765-m02
	I0625 15:57:26.577964   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:26.577971   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:26.577979   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:26.580097   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:26.580708   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:26.580722   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:26.580729   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:26.580734   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:26.582765   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:27.078811   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-674765-m02
	I0625 15:57:27.078837   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:27.078848   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:27.078853   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:27.081973   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:27.082745   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:27.082759   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:27.082766   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:27.082772   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:27.085337   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:27.578151   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-674765-m02
	I0625 15:57:27.578171   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:27.578178   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:27.578182   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:27.581219   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:27.581951   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:27.581967   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:27.581974   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:27.581978   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:27.584824   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:28.078904   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-674765-m02
	I0625 15:57:28.078928   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:28.078938   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:28.078944   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:28.082005   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:28.082825   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:28.082842   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:28.082851   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:28.082858   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:28.085426   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:28.578694   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-674765-m02
	I0625 15:57:28.578716   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:28.578727   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:28.578733   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:28.581575   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:28.582541   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:28.582556   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:28.582566   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:28.582572   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:28.584998   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:28.585482   36162 pod_ready.go:102] pod "kube-apiserver-ha-674765-m02" in "kube-system" namespace has status "Ready":"False"
	I0625 15:57:29.078896   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-674765-m02
	I0625 15:57:29.078916   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:29.078924   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:29.078928   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:29.082136   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:29.083150   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:29.083173   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:29.083182   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:29.083187   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:29.085938   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:29.578152   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-674765-m02
	I0625 15:57:29.578172   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:29.578179   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:29.578182   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:29.580956   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:29.581742   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:29.581764   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:29.581775   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:29.581784   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:29.584418   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:30.078413   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-674765-m02
	I0625 15:57:30.078434   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:30.078444   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:30.078453   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:30.081862   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:30.082598   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:30.082616   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:30.082626   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:30.082643   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:30.085130   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:30.085634   36162 pod_ready.go:92] pod "kube-apiserver-ha-674765-m02" in "kube-system" namespace has status "Ready":"True"
	I0625 15:57:30.085653   36162 pod_ready.go:81] duration metric: took 3.507746266s for pod "kube-apiserver-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:30.085666   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:30.085718   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-674765
	I0625 15:57:30.085727   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:30.085737   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:30.085742   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:30.088893   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:30.090008   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:57:30.090023   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:30.090033   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:30.090039   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:30.092465   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:30.093045   36162 pod_ready.go:92] pod "kube-controller-manager-ha-674765" in "kube-system" namespace has status "Ready":"True"
	I0625 15:57:30.093068   36162 pod_ready.go:81] duration metric: took 7.394198ms for pod "kube-controller-manager-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:30.093078   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:30.093117   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-674765-m02
	I0625 15:57:30.093126   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:30.093132   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:30.093135   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:30.095802   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:30.096367   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:30.096379   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:30.096386   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:30.096390   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:30.098214   36162 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0625 15:57:30.098647   36162 pod_ready.go:92] pod "kube-controller-manager-ha-674765-m02" in "kube-system" namespace has status "Ready":"True"
	I0625 15:57:30.098661   36162 pod_ready.go:81] duration metric: took 5.577923ms for pod "kube-controller-manager-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:30.098668   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lsmft" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:30.098709   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lsmft
	I0625 15:57:30.098716   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:30.098723   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:30.098726   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:30.100989   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:30.134791   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:30.134806   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:30.134814   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:30.134820   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:30.137029   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:30.137573   36162 pod_ready.go:92] pod "kube-proxy-lsmft" in "kube-system" namespace has status "Ready":"True"
	I0625 15:57:30.137590   36162 pod_ready.go:81] duration metric: took 38.915586ms for pod "kube-proxy-lsmft" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:30.137600   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rh9n5" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:30.335009   36162 request.go:629] Waited for 197.354925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rh9n5
	I0625 15:57:30.335063   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rh9n5
	I0625 15:57:30.335070   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:30.335082   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:30.335090   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:30.338543   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:30.534537   36162 request.go:629] Waited for 195.314147ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:57:30.534621   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:57:30.534631   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:30.534643   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:30.534652   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:30.538384   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:30.539076   36162 pod_ready.go:92] pod "kube-proxy-rh9n5" in "kube-system" namespace has status "Ready":"True"
	I0625 15:57:30.539095   36162 pod_ready.go:81] duration metric: took 401.488432ms for pod "kube-proxy-rh9n5" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:30.539106   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:30.735247   36162 request.go:629] Waited for 196.079864ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-674765
	I0625 15:57:30.735325   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-674765
	I0625 15:57:30.735344   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:30.735369   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:30.735377   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:30.738144   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:30.934342   36162 request.go:629] Waited for 195.252677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:57:30.934435   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:57:30.934452   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:30.934459   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:30.934463   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:30.936872   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:30.937419   36162 pod_ready.go:92] pod "kube-scheduler-ha-674765" in "kube-system" namespace has status "Ready":"True"
	I0625 15:57:30.937438   36162 pod_ready.go:81] duration metric: took 398.324735ms for pod "kube-scheduler-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:30.937446   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:31.134503   36162 request.go:629] Waited for 196.991431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-674765-m02
	I0625 15:57:31.134579   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-674765-m02
	I0625 15:57:31.134587   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:31.134597   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:31.134604   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:31.137530   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:31.334415   36162 request.go:629] Waited for 196.279639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:31.334489   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:31.334514   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:31.334522   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:31.334527   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:31.337333   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:31.338097   36162 pod_ready.go:92] pod "kube-scheduler-ha-674765-m02" in "kube-system" namespace has status "Ready":"True"
	I0625 15:57:31.338118   36162 pod_ready.go:81] duration metric: took 400.664445ms for pod "kube-scheduler-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:31.338132   36162 pod_ready.go:38] duration metric: took 10.800092753s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
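
Note: the pod_ready.go lines above poll each control-plane pod roughly every 500ms until its PodReady condition reports "True". Below is a small illustrative sketch of that kind of readiness poll using client-go; it is not minikube's own implementation, and the kubeconfig path and pod name are placeholder assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; adjust for a real cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll at a fixed interval, mirroring the ~500ms spacing of the GETs above.
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-ha-674765-m02", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
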
	I0625 15:57:31.338152   36162 api_server.go:52] waiting for apiserver process to appear ...
	I0625 15:57:31.338198   36162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 15:57:31.354959   36162 api_server.go:72] duration metric: took 18.587310981s to wait for apiserver process to appear ...
	I0625 15:57:31.354974   36162 api_server.go:88] waiting for apiserver healthz status ...
	I0625 15:57:31.354989   36162 api_server.go:253] Checking apiserver healthz at https://192.168.39.128:8443/healthz ...
	I0625 15:57:31.360620   36162 api_server.go:279] https://192.168.39.128:8443/healthz returned 200:
	ok
	I0625 15:57:31.360687   36162 round_trippers.go:463] GET https://192.168.39.128:8443/version
	I0625 15:57:31.360700   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:31.360711   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:31.360722   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:31.361509   36162 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0625 15:57:31.361608   36162 api_server.go:141] control plane version: v1.30.2
	I0625 15:57:31.361626   36162 api_server.go:131] duration metric: took 6.646092ms to wait for apiserver health ...
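
Note: the healthz probe (api_server.go:253) and the GET /version that follow amount to two HTTPS GETs against the apiserver. The sketch below reproduces the same two checks under stated assumptions: it relies on the default RBAC binding that exposes /healthz and /version to anonymous clients, and it skips TLS verification purely for illustration, whereas minikube itself authenticates with the cluster's own certificates.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Illustration only: skip certificate verification instead of trusting the cluster CA.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get("https://192.168.39.128:8443" + path)
		if err != nil {
			panic(err)
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		// /healthz should return the literal body "ok"; /version returns JSON with gitVersion v1.30.2.
		fmt.Printf("%s -> %d: %s\n", path, resp.StatusCode, body)
	}
}
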
	I0625 15:57:31.361635   36162 system_pods.go:43] waiting for kube-system pods to appear ...
	I0625 15:57:31.534552   36162 request.go:629] Waited for 172.857921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0625 15:57:31.534608   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0625 15:57:31.534613   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:31.534621   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:31.534624   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:31.540074   36162 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0625 15:57:31.544624   36162 system_pods.go:59] 17 kube-system pods found
	I0625 15:57:31.544648   36162 system_pods.go:61] "coredns-7db6d8ff4d-28db5" [1426e4a3-2f25-47e9-9b28-b23a81a3a19a] Running
	I0625 15:57:31.544653   36162 system_pods.go:61] "coredns-7db6d8ff4d-84zkt" [2f6426f8-a0c4-470c-b2b1-b62fa304c078] Running
	I0625 15:57:31.544658   36162 system_pods.go:61] "etcd-ha-674765" [a8f7d82c-8fc7-4190-99c2-0bedc24d8f4f] Running
	I0625 15:57:31.544661   36162 system_pods.go:61] "etcd-ha-674765-m02" [e3f94832-96fe-4bbf-8c53-86bab692b6a9] Running
	I0625 15:57:31.544664   36162 system_pods.go:61] "kindnet-kkgdq" [cfb408ee-0f73-4537-87fb-fad3d2b1f3f1] Running
	I0625 15:57:31.544667   36162 system_pods.go:61] "kindnet-ntq77" [37736a9f-5b4c-421c-9027-81e961ab8550] Running
	I0625 15:57:31.544670   36162 system_pods.go:61] "kube-apiserver-ha-674765" [594e5a19-d80b-4b26-8c91-a8475fb99630] Running
	I0625 15:57:31.544673   36162 system_pods.go:61] "kube-apiserver-ha-674765-m02" [e00ad102-e252-49e9-82e4-b466ae4eb7b2] Running
	I0625 15:57:31.544676   36162 system_pods.go:61] "kube-controller-manager-ha-674765" [5f4f1e7d-f796-4762-9f33-61755c0daef3] Running
	I0625 15:57:31.544679   36162 system_pods.go:61] "kube-controller-manager-ha-674765-m02" [acb4b5ca-b29e-4866-be68-eb4c6425463d] Running
	I0625 15:57:31.544682   36162 system_pods.go:61] "kube-proxy-lsmft" [fa5d210a-1295-497c-8a24-6a0f0dc941de] Running
	I0625 15:57:31.544684   36162 system_pods.go:61] "kube-proxy-rh9n5" [a0a24539-3168-42cc-93b3-d0b1e283d0bd] Running
	I0625 15:57:31.544687   36162 system_pods.go:61] "kube-scheduler-ha-674765" [2695280a-4dd5-4073-875e-63e5238bd1b7] Running
	I0625 15:57:31.544690   36162 system_pods.go:61] "kube-scheduler-ha-674765-m02" [dc04f489-1084-48d4-8cec-c79ec30e0987] Running
	I0625 15:57:31.544692   36162 system_pods.go:61] "kube-vip-ha-674765" [1d132475-65bb-43d1-9353-12b7be1f311f] Running
	I0625 15:57:31.544695   36162 system_pods.go:61] "kube-vip-ha-674765-m02" [dbde28c7-a109-4a7e-97bb-27576a94d2fe] Running
	I0625 15:57:31.544698   36162 system_pods.go:61] "storage-provisioner" [c227c5cf-2bd6-4ebf-9fdb-09d4229cf421] Running
	I0625 15:57:31.544704   36162 system_pods.go:74] duration metric: took 183.060621ms to wait for pod list to return data ...
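
Note: the repeated "Waited for ... due to client-side throttling, not priority and fairness" messages in this phase come from client-go's local rate limiter, not from the apiserver. The sketch below shows where that limiter is configured on a rest.Config; the QPS and Burst values are client-go's usual defaults shown as an example, not minikube's actual settings, and the kubeconfig path is a placeholder.

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	// The client allows short bursts of up to Burst requests and refills tokens
	// at QPS per second; when the bucket is empty, requests wait locally, which
	// is what produces the "Waited for ... due to client-side throttling" lines.
	cfg.QPS = 5
	cfg.Burst = 10
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}
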
	I0625 15:57:31.544714   36162 default_sa.go:34] waiting for default service account to be created ...
	I0625 15:57:31.735105   36162 request.go:629] Waited for 190.327717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/default/serviceaccounts
	I0625 15:57:31.735155   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/default/serviceaccounts
	I0625 15:57:31.735160   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:31.735167   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:31.735170   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:31.738732   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:31.739002   36162 default_sa.go:45] found service account: "default"
	I0625 15:57:31.739025   36162 default_sa.go:55] duration metric: took 194.303559ms for default service account to be created ...
	I0625 15:57:31.739035   36162 system_pods.go:116] waiting for k8s-apps to be running ...
	I0625 15:57:31.934362   36162 request.go:629] Waited for 195.267283ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0625 15:57:31.934438   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0625 15:57:31.934444   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:31.934451   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:31.934459   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:31.939237   36162 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0625 15:57:31.943992   36162 system_pods.go:86] 17 kube-system pods found
	I0625 15:57:31.944014   36162 system_pods.go:89] "coredns-7db6d8ff4d-28db5" [1426e4a3-2f25-47e9-9b28-b23a81a3a19a] Running
	I0625 15:57:31.944020   36162 system_pods.go:89] "coredns-7db6d8ff4d-84zkt" [2f6426f8-a0c4-470c-b2b1-b62fa304c078] Running
	I0625 15:57:31.944024   36162 system_pods.go:89] "etcd-ha-674765" [a8f7d82c-8fc7-4190-99c2-0bedc24d8f4f] Running
	I0625 15:57:31.944028   36162 system_pods.go:89] "etcd-ha-674765-m02" [e3f94832-96fe-4bbf-8c53-86bab692b6a9] Running
	I0625 15:57:31.944031   36162 system_pods.go:89] "kindnet-kkgdq" [cfb408ee-0f73-4537-87fb-fad3d2b1f3f1] Running
	I0625 15:57:31.944035   36162 system_pods.go:89] "kindnet-ntq77" [37736a9f-5b4c-421c-9027-81e961ab8550] Running
	I0625 15:57:31.944044   36162 system_pods.go:89] "kube-apiserver-ha-674765" [594e5a19-d80b-4b26-8c91-a8475fb99630] Running
	I0625 15:57:31.944048   36162 system_pods.go:89] "kube-apiserver-ha-674765-m02" [e00ad102-e252-49e9-82e4-b466ae4eb7b2] Running
	I0625 15:57:31.944052   36162 system_pods.go:89] "kube-controller-manager-ha-674765" [5f4f1e7d-f796-4762-9f33-61755c0daef3] Running
	I0625 15:57:31.944056   36162 system_pods.go:89] "kube-controller-manager-ha-674765-m02" [acb4b5ca-b29e-4866-be68-eb4c6425463d] Running
	I0625 15:57:31.944061   36162 system_pods.go:89] "kube-proxy-lsmft" [fa5d210a-1295-497c-8a24-6a0f0dc941de] Running
	I0625 15:57:31.944065   36162 system_pods.go:89] "kube-proxy-rh9n5" [a0a24539-3168-42cc-93b3-d0b1e283d0bd] Running
	I0625 15:57:31.944068   36162 system_pods.go:89] "kube-scheduler-ha-674765" [2695280a-4dd5-4073-875e-63e5238bd1b7] Running
	I0625 15:57:31.944072   36162 system_pods.go:89] "kube-scheduler-ha-674765-m02" [dc04f489-1084-48d4-8cec-c79ec30e0987] Running
	I0625 15:57:31.944076   36162 system_pods.go:89] "kube-vip-ha-674765" [1d132475-65bb-43d1-9353-12b7be1f311f] Running
	I0625 15:57:31.944079   36162 system_pods.go:89] "kube-vip-ha-674765-m02" [dbde28c7-a109-4a7e-97bb-27576a94d2fe] Running
	I0625 15:57:31.944082   36162 system_pods.go:89] "storage-provisioner" [c227c5cf-2bd6-4ebf-9fdb-09d4229cf421] Running
	I0625 15:57:31.944088   36162 system_pods.go:126] duration metric: took 205.047376ms to wait for k8s-apps to be running ...
	I0625 15:57:31.944097   36162 system_svc.go:44] waiting for kubelet service to be running ....
	I0625 15:57:31.944138   36162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 15:57:31.960094   36162 system_svc.go:56] duration metric: took 15.988807ms WaitForService to wait for kubelet
	I0625 15:57:31.960116   36162 kubeadm.go:576] duration metric: took 19.192468967s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0625 15:57:31.960134   36162 node_conditions.go:102] verifying NodePressure condition ...
	I0625 15:57:32.134343   36162 request.go:629] Waited for 174.153112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes
	I0625 15:57:32.134416   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes
	I0625 15:57:32.134427   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:32.134441   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:32.134450   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:32.137663   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:32.138464   36162 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0625 15:57:32.138508   36162 node_conditions.go:123] node cpu capacity is 2
	I0625 15:57:32.138519   36162 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0625 15:57:32.138523   36162 node_conditions.go:123] node cpu capacity is 2
	I0625 15:57:32.138527   36162 node_conditions.go:105] duration metric: took 178.388689ms to run NodePressure ...
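
Note: the node_conditions.go lines above read each node's ephemeral-storage and cpu capacity (17734596Ki and 2 CPUs per node here) while verifying the NodePressure condition. A minimal sketch of reading those same fields from the Node objects follows; again this is not minikube's code and the kubeconfig path is a placeholder.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
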
	I0625 15:57:32.138538   36162 start.go:240] waiting for startup goroutines ...
	I0625 15:57:32.138559   36162 start.go:254] writing updated cluster config ...
	I0625 15:57:32.140399   36162 out.go:177] 
	I0625 15:57:32.141783   36162 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 15:57:32.141866   36162 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/config.json ...
	I0625 15:57:32.143394   36162 out.go:177] * Starting "ha-674765-m03" control-plane node in "ha-674765" cluster
	I0625 15:57:32.144529   36162 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0625 15:57:32.144548   36162 cache.go:56] Caching tarball of preloaded images
	I0625 15:57:32.144629   36162 preload.go:173] Found /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0625 15:57:32.144639   36162 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0625 15:57:32.144725   36162 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/config.json ...
	I0625 15:57:32.144869   36162 start.go:360] acquireMachinesLock for ha-674765-m03: {Name:mk2a1ebee912b37a2b68bf2f76641f82f8fc2fcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0625 15:57:32.144904   36162 start.go:364] duration metric: took 20.207µs to acquireMachinesLock for "ha-674765-m03"
	I0625 15:57:32.144919   36162 start.go:93] Provisioning new machine with config: &{Name:ha-674765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-674765 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0625 15:57:32.145000   36162 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0625 15:57:32.146413   36162 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0625 15:57:32.146497   36162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:57:32.146527   36162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:57:32.161533   36162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37297
	I0625 15:57:32.161857   36162 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:57:32.162239   36162 main.go:141] libmachine: Using API Version  1
	I0625 15:57:32.162262   36162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:57:32.162557   36162 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:57:32.162765   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetMachineName
	I0625 15:57:32.162921   36162 main.go:141] libmachine: (ha-674765-m03) Calling .DriverName
	I0625 15:57:32.163059   36162 start.go:159] libmachine.API.Create for "ha-674765" (driver="kvm2")
	I0625 15:57:32.163087   36162 client.go:168] LocalClient.Create starting
	I0625 15:57:32.163121   36162 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem
	I0625 15:57:32.163157   36162 main.go:141] libmachine: Decoding PEM data...
	I0625 15:57:32.163185   36162 main.go:141] libmachine: Parsing certificate...
	I0625 15:57:32.163247   36162 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem
	I0625 15:57:32.163274   36162 main.go:141] libmachine: Decoding PEM data...
	I0625 15:57:32.163291   36162 main.go:141] libmachine: Parsing certificate...
	I0625 15:57:32.163324   36162 main.go:141] libmachine: Running pre-create checks...
	I0625 15:57:32.163336   36162 main.go:141] libmachine: (ha-674765-m03) Calling .PreCreateCheck
	I0625 15:57:32.163476   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetConfigRaw
	I0625 15:57:32.163843   36162 main.go:141] libmachine: Creating machine...
	I0625 15:57:32.163858   36162 main.go:141] libmachine: (ha-674765-m03) Calling .Create
	I0625 15:57:32.163976   36162 main.go:141] libmachine: (ha-674765-m03) Creating KVM machine...
	I0625 15:57:32.164992   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found existing default KVM network
	I0625 15:57:32.165138   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found existing private KVM network mk-ha-674765
	I0625 15:57:32.165262   36162 main.go:141] libmachine: (ha-674765-m03) Setting up store path in /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03 ...
	I0625 15:57:32.165284   36162 main.go:141] libmachine: (ha-674765-m03) Building disk image from file:///home/jenkins/minikube-integration/19128-13846/.minikube/cache/iso/amd64/minikube-v1.33.1-1719245461-19128-amd64.iso
	I0625 15:57:32.165317   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:32.165244   36953 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 15:57:32.165396   36162 main.go:141] libmachine: (ha-674765-m03) Downloading /home/jenkins/minikube-integration/19128-13846/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19128-13846/.minikube/cache/iso/amd64/minikube-v1.33.1-1719245461-19128-amd64.iso...
	I0625 15:57:32.386670   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:32.386569   36953 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/id_rsa...
	I0625 15:57:32.699159   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:32.699058   36953 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/ha-674765-m03.rawdisk...
	I0625 15:57:32.699189   36162 main.go:141] libmachine: (ha-674765-m03) DBG | Writing magic tar header
	I0625 15:57:32.699211   36162 main.go:141] libmachine: (ha-674765-m03) DBG | Writing SSH key tar header
	I0625 15:57:32.699223   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:32.699167   36953 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03 ...
	I0625 15:57:32.699269   36162 main.go:141] libmachine: (ha-674765-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03
	I0625 15:57:32.699289   36162 main.go:141] libmachine: (ha-674765-m03) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03 (perms=drwx------)
	I0625 15:57:32.699313   36162 main.go:141] libmachine: (ha-674765-m03) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube/machines (perms=drwxr-xr-x)
	I0625 15:57:32.699332   36162 main.go:141] libmachine: (ha-674765-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube/machines
	I0625 15:57:32.699344   36162 main.go:141] libmachine: (ha-674765-m03) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube (perms=drwxr-xr-x)
	I0625 15:57:32.699369   36162 main.go:141] libmachine: (ha-674765-m03) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846 (perms=drwxrwxr-x)
	I0625 15:57:32.699386   36162 main.go:141] libmachine: (ha-674765-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0625 15:57:32.699400   36162 main.go:141] libmachine: (ha-674765-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0625 15:57:32.699411   36162 main.go:141] libmachine: (ha-674765-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 15:57:32.699422   36162 main.go:141] libmachine: (ha-674765-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846
	I0625 15:57:32.699431   36162 main.go:141] libmachine: (ha-674765-m03) Creating domain...
	I0625 15:57:32.699463   36162 main.go:141] libmachine: (ha-674765-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0625 15:57:32.699487   36162 main.go:141] libmachine: (ha-674765-m03) DBG | Checking permissions on dir: /home/jenkins
	I0625 15:57:32.699498   36162 main.go:141] libmachine: (ha-674765-m03) DBG | Checking permissions on dir: /home
	I0625 15:57:32.699506   36162 main.go:141] libmachine: (ha-674765-m03) DBG | Skipping /home - not owner
	I0625 15:57:32.700382   36162 main.go:141] libmachine: (ha-674765-m03) define libvirt domain using xml: 
	I0625 15:57:32.700410   36162 main.go:141] libmachine: (ha-674765-m03) <domain type='kvm'>
	I0625 15:57:32.700420   36162 main.go:141] libmachine: (ha-674765-m03)   <name>ha-674765-m03</name>
	I0625 15:57:32.700428   36162 main.go:141] libmachine: (ha-674765-m03)   <memory unit='MiB'>2200</memory>
	I0625 15:57:32.700437   36162 main.go:141] libmachine: (ha-674765-m03)   <vcpu>2</vcpu>
	I0625 15:57:32.700443   36162 main.go:141] libmachine: (ha-674765-m03)   <features>
	I0625 15:57:32.700450   36162 main.go:141] libmachine: (ha-674765-m03)     <acpi/>
	I0625 15:57:32.700461   36162 main.go:141] libmachine: (ha-674765-m03)     <apic/>
	I0625 15:57:32.700472   36162 main.go:141] libmachine: (ha-674765-m03)     <pae/>
	I0625 15:57:32.700481   36162 main.go:141] libmachine: (ha-674765-m03)     
	I0625 15:57:32.700492   36162 main.go:141] libmachine: (ha-674765-m03)   </features>
	I0625 15:57:32.700509   36162 main.go:141] libmachine: (ha-674765-m03)   <cpu mode='host-passthrough'>
	I0625 15:57:32.700518   36162 main.go:141] libmachine: (ha-674765-m03)   
	I0625 15:57:32.700529   36162 main.go:141] libmachine: (ha-674765-m03)   </cpu>
	I0625 15:57:32.700548   36162 main.go:141] libmachine: (ha-674765-m03)   <os>
	I0625 15:57:32.700561   36162 main.go:141] libmachine: (ha-674765-m03)     <type>hvm</type>
	I0625 15:57:32.700571   36162 main.go:141] libmachine: (ha-674765-m03)     <boot dev='cdrom'/>
	I0625 15:57:32.700582   36162 main.go:141] libmachine: (ha-674765-m03)     <boot dev='hd'/>
	I0625 15:57:32.700590   36162 main.go:141] libmachine: (ha-674765-m03)     <bootmenu enable='no'/>
	I0625 15:57:32.700599   36162 main.go:141] libmachine: (ha-674765-m03)   </os>
	I0625 15:57:32.700608   36162 main.go:141] libmachine: (ha-674765-m03)   <devices>
	I0625 15:57:32.700618   36162 main.go:141] libmachine: (ha-674765-m03)     <disk type='file' device='cdrom'>
	I0625 15:57:32.700652   36162 main.go:141] libmachine: (ha-674765-m03)       <source file='/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/boot2docker.iso'/>
	I0625 15:57:32.700673   36162 main.go:141] libmachine: (ha-674765-m03)       <target dev='hdc' bus='scsi'/>
	I0625 15:57:32.700687   36162 main.go:141] libmachine: (ha-674765-m03)       <readonly/>
	I0625 15:57:32.700699   36162 main.go:141] libmachine: (ha-674765-m03)     </disk>
	I0625 15:57:32.700709   36162 main.go:141] libmachine: (ha-674765-m03)     <disk type='file' device='disk'>
	I0625 15:57:32.700722   36162 main.go:141] libmachine: (ha-674765-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0625 15:57:32.700738   36162 main.go:141] libmachine: (ha-674765-m03)       <source file='/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/ha-674765-m03.rawdisk'/>
	I0625 15:57:32.700754   36162 main.go:141] libmachine: (ha-674765-m03)       <target dev='hda' bus='virtio'/>
	I0625 15:57:32.700770   36162 main.go:141] libmachine: (ha-674765-m03)     </disk>
	I0625 15:57:32.700780   36162 main.go:141] libmachine: (ha-674765-m03)     <interface type='network'>
	I0625 15:57:32.700792   36162 main.go:141] libmachine: (ha-674765-m03)       <source network='mk-ha-674765'/>
	I0625 15:57:32.700803   36162 main.go:141] libmachine: (ha-674765-m03)       <model type='virtio'/>
	I0625 15:57:32.700814   36162 main.go:141] libmachine: (ha-674765-m03)     </interface>
	I0625 15:57:32.700825   36162 main.go:141] libmachine: (ha-674765-m03)     <interface type='network'>
	I0625 15:57:32.700839   36162 main.go:141] libmachine: (ha-674765-m03)       <source network='default'/>
	I0625 15:57:32.700848   36162 main.go:141] libmachine: (ha-674765-m03)       <model type='virtio'/>
	I0625 15:57:32.700855   36162 main.go:141] libmachine: (ha-674765-m03)     </interface>
	I0625 15:57:32.700863   36162 main.go:141] libmachine: (ha-674765-m03)     <serial type='pty'>
	I0625 15:57:32.700873   36162 main.go:141] libmachine: (ha-674765-m03)       <target port='0'/>
	I0625 15:57:32.700882   36162 main.go:141] libmachine: (ha-674765-m03)     </serial>
	I0625 15:57:32.700892   36162 main.go:141] libmachine: (ha-674765-m03)     <console type='pty'>
	I0625 15:57:32.700913   36162 main.go:141] libmachine: (ha-674765-m03)       <target type='serial' port='0'/>
	I0625 15:57:32.700932   36162 main.go:141] libmachine: (ha-674765-m03)     </console>
	I0625 15:57:32.700944   36162 main.go:141] libmachine: (ha-674765-m03)     <rng model='virtio'>
	I0625 15:57:32.700953   36162 main.go:141] libmachine: (ha-674765-m03)       <backend model='random'>/dev/random</backend>
	I0625 15:57:32.700962   36162 main.go:141] libmachine: (ha-674765-m03)     </rng>
	I0625 15:57:32.700966   36162 main.go:141] libmachine: (ha-674765-m03)     
	I0625 15:57:32.700973   36162 main.go:141] libmachine: (ha-674765-m03)     
	I0625 15:57:32.700978   36162 main.go:141] libmachine: (ha-674765-m03)   </devices>
	I0625 15:57:32.700993   36162 main.go:141] libmachine: (ha-674765-m03) </domain>
	I0625 15:57:32.700999   36162 main.go:141] libmachine: (ha-674765-m03) 
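
The XML block the driver logs above is an ordinary libvirt domain definition. As a rough illustration only (not the kvm2 driver's actual code), defining and booting such a domain with the github.com/libvirt/libvirt-go bindings looks roughly like this; the XML file path is a placeholder:

// Sketch: define and start a persistent libvirt domain from an XML file,
// mirroring the "define libvirt domain using xml" / "Creating domain..."
// steps in the log. Paths are placeholders.
package main

import (
	"log"
	"os"

	libvirt "github.com/libvirt/libvirt-go"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect to libvirt: %v", err)
	}
	defer conn.Close()

	// Domain XML equivalent to the block printed in the log above.
	xml, err := os.ReadFile("ha-674765-m03.xml")
	if err != nil {
		log.Fatalf("read domain xml: %v", err)
	}

	// Define the persistent domain, then boot it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatalf("define domain: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start domain: %v", err)
	}
	log.Println("domain defined and started")
}
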
	I0625 15:57:32.707312   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:06:25:01 in network default
	I0625 15:57:32.707869   36162 main.go:141] libmachine: (ha-674765-m03) Ensuring networks are active...
	I0625 15:57:32.707896   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:32.708594   36162 main.go:141] libmachine: (ha-674765-m03) Ensuring network default is active
	I0625 15:57:32.708856   36162 main.go:141] libmachine: (ha-674765-m03) Ensuring network mk-ha-674765 is active
	I0625 15:57:32.709236   36162 main.go:141] libmachine: (ha-674765-m03) Getting domain xml...
	I0625 15:57:32.709886   36162 main.go:141] libmachine: (ha-674765-m03) Creating domain...
	I0625 15:57:33.899693   36162 main.go:141] libmachine: (ha-674765-m03) Waiting to get IP...
	I0625 15:57:33.900360   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:33.900728   36162 main.go:141] libmachine: (ha-674765-m03) DBG | unable to find current IP address of domain ha-674765-m03 in network mk-ha-674765
	I0625 15:57:33.900768   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:33.900704   36953 retry.go:31] will retry after 189.370323ms: waiting for machine to come up
	I0625 15:57:34.092001   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:34.092489   36162 main.go:141] libmachine: (ha-674765-m03) DBG | unable to find current IP address of domain ha-674765-m03 in network mk-ha-674765
	I0625 15:57:34.092518   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:34.092447   36953 retry.go:31] will retry after 291.630508ms: waiting for machine to come up
	I0625 15:57:34.386127   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:34.386650   36162 main.go:141] libmachine: (ha-674765-m03) DBG | unable to find current IP address of domain ha-674765-m03 in network mk-ha-674765
	I0625 15:57:34.386683   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:34.386620   36953 retry.go:31] will retry after 457.585129ms: waiting for machine to come up
	I0625 15:57:34.845906   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:34.846363   36162 main.go:141] libmachine: (ha-674765-m03) DBG | unable to find current IP address of domain ha-674765-m03 in network mk-ha-674765
	I0625 15:57:34.846393   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:34.846314   36953 retry.go:31] will retry after 422.838014ms: waiting for machine to come up
	I0625 15:57:35.270927   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:35.271439   36162 main.go:141] libmachine: (ha-674765-m03) DBG | unable to find current IP address of domain ha-674765-m03 in network mk-ha-674765
	I0625 15:57:35.271489   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:35.271391   36953 retry.go:31] will retry after 708.280663ms: waiting for machine to come up
	I0625 15:57:35.981141   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:35.981691   36162 main.go:141] libmachine: (ha-674765-m03) DBG | unable to find current IP address of domain ha-674765-m03 in network mk-ha-674765
	I0625 15:57:35.981716   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:35.981645   36953 retry.go:31] will retry after 612.083185ms: waiting for machine to come up
	I0625 15:57:36.595308   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:36.595771   36162 main.go:141] libmachine: (ha-674765-m03) DBG | unable to find current IP address of domain ha-674765-m03 in network mk-ha-674765
	I0625 15:57:36.595799   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:36.595721   36953 retry.go:31] will retry after 1.0908696s: waiting for machine to come up
	I0625 15:57:37.688174   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:37.688629   36162 main.go:141] libmachine: (ha-674765-m03) DBG | unable to find current IP address of domain ha-674765-m03 in network mk-ha-674765
	I0625 15:57:37.688657   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:37.688557   36953 retry.go:31] will retry after 1.438169506s: waiting for machine to come up
	I0625 15:57:39.128827   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:39.129230   36162 main.go:141] libmachine: (ha-674765-m03) DBG | unable to find current IP address of domain ha-674765-m03 in network mk-ha-674765
	I0625 15:57:39.129260   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:39.129180   36953 retry.go:31] will retry after 1.56479191s: waiting for machine to come up
	I0625 15:57:40.696115   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:40.696651   36162 main.go:141] libmachine: (ha-674765-m03) DBG | unable to find current IP address of domain ha-674765-m03 in network mk-ha-674765
	I0625 15:57:40.696685   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:40.696588   36953 retry.go:31] will retry after 2.133683184s: waiting for machine to come up
	I0625 15:57:42.831736   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:42.832207   36162 main.go:141] libmachine: (ha-674765-m03) DBG | unable to find current IP address of domain ha-674765-m03 in network mk-ha-674765
	I0625 15:57:42.832234   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:42.832164   36953 retry.go:31] will retry after 2.653932997s: waiting for machine to come up
	I0625 15:57:45.487150   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:45.487513   36162 main.go:141] libmachine: (ha-674765-m03) DBG | unable to find current IP address of domain ha-674765-m03 in network mk-ha-674765
	I0625 15:57:45.487538   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:45.487478   36953 retry.go:31] will retry after 2.909129093s: waiting for machine to come up
	I0625 15:57:48.398685   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:48.399063   36162 main.go:141] libmachine: (ha-674765-m03) DBG | unable to find current IP address of domain ha-674765-m03 in network mk-ha-674765
	I0625 15:57:48.399085   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:48.399019   36953 retry.go:31] will retry after 3.985733944s: waiting for machine to come up
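
The "will retry after ..." lines above follow a simple grow-the-delay polling pattern while the driver waits for the VM to pick up a DHCP lease. A minimal, generic sketch of that pattern (made-up helper names, not minikube's retry.go):

// Poll a lookup function with a growing, jittered delay until it succeeds
// or a deadline passes. lookupIP is a stand-in for querying DHCP leases.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Grow the delay and add jitter, roughly like the log's
		// 189ms, 291ms, 457ms, ... progression.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.77", nil
	}, 2*time.Minute)
	fmt.Println(ip, err)
}
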
	I0625 15:57:52.386600   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:52.387072   36162 main.go:141] libmachine: (ha-674765-m03) Found IP for machine: 192.168.39.77
	I0625 15:57:52.387090   36162 main.go:141] libmachine: (ha-674765-m03) Reserving static IP address...
	I0625 15:57:52.387100   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has current primary IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:52.387489   36162 main.go:141] libmachine: (ha-674765-m03) DBG | unable to find host DHCP lease matching {name: "ha-674765-m03", mac: "52:54:00:82:ed:f4", ip: "192.168.39.77"} in network mk-ha-674765
	I0625 15:57:52.457146   36162 main.go:141] libmachine: (ha-674765-m03) DBG | Getting to WaitForSSH function...
	I0625 15:57:52.457178   36162 main.go:141] libmachine: (ha-674765-m03) Reserved static IP address: 192.168.39.77
	I0625 15:57:52.457191   36162 main.go:141] libmachine: (ha-674765-m03) Waiting for SSH to be available...
	I0625 15:57:52.459845   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:52.460386   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:minikube Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:52.460410   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:52.460600   36162 main.go:141] libmachine: (ha-674765-m03) DBG | Using SSH client type: external
	I0625 15:57:52.460631   36162 main.go:141] libmachine: (ha-674765-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/id_rsa (-rw-------)
	I0625 15:57:52.460668   36162 main.go:141] libmachine: (ha-674765-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.77 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0625 15:57:52.460685   36162 main.go:141] libmachine: (ha-674765-m03) DBG | About to run SSH command:
	I0625 15:57:52.460700   36162 main.go:141] libmachine: (ha-674765-m03) DBG | exit 0
	I0625 15:57:52.590423   36162 main.go:141] libmachine: (ha-674765-m03) DBG | SSH cmd err, output: <nil>: 
	I0625 15:57:52.590753   36162 main.go:141] libmachine: (ha-674765-m03) KVM machine creation complete!
	I0625 15:57:52.591027   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetConfigRaw
	I0625 15:57:52.591644   36162 main.go:141] libmachine: (ha-674765-m03) Calling .DriverName
	I0625 15:57:52.591853   36162 main.go:141] libmachine: (ha-674765-m03) Calling .DriverName
	I0625 15:57:52.592023   36162 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0625 15:57:52.592039   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetState
	I0625 15:57:52.593296   36162 main.go:141] libmachine: Detecting operating system of created instance...
	I0625 15:57:52.593309   36162 main.go:141] libmachine: Waiting for SSH to be available...
	I0625 15:57:52.593314   36162 main.go:141] libmachine: Getting to WaitForSSH function...
	I0625 15:57:52.593320   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 15:57:52.595498   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:52.595852   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:52.595878   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:52.595996   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 15:57:52.596158   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:52.596333   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:52.596476   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 15:57:52.596622   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:57:52.596866   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0625 15:57:52.596883   36162 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0625 15:57:52.713626   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
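
The readiness probe above simply runs "exit 0" over SSH with the generated key. A hedged sketch of the same check using golang.org/x/crypto/ssh, with the host, user and key path taken from this log (this is not minikube's own SSH helper):

// Dial the new VM, authenticate with the machine's private key, and run
// "exit 0" to prove sshd is up and the key is accepted.
package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/id_rsa")
	if err != nil {
		log.Fatalf("read key: %v", err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatalf("parse key: %v", err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
		Timeout:         10 * time.Second,
	}

	client, err := ssh.Dial("tcp", "192.168.39.77:22", cfg)
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatalf("session: %v", err)
	}
	defer sess.Close()

	// "exit 0" just proves the daemon can run commands for this user.
	if err := sess.Run("exit 0"); err != nil {
		log.Fatalf("exit 0 failed: %v", err)
	}
	log.Println("SSH is available")
}
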
	I0625 15:57:52.713648   36162 main.go:141] libmachine: Detecting the provisioner...
	I0625 15:57:52.713659   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 15:57:52.716664   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:52.717110   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:52.717136   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:52.717312   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 15:57:52.717486   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:52.717638   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:52.717774   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 15:57:52.717917   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:57:52.718128   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0625 15:57:52.718147   36162 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0625 15:57:52.830947   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0625 15:57:52.831013   36162 main.go:141] libmachine: found compatible host: buildroot
	I0625 15:57:52.831026   36162 main.go:141] libmachine: Provisioning with buildroot...
	I0625 15:57:52.831037   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetMachineName
	I0625 15:57:52.831265   36162 buildroot.go:166] provisioning hostname "ha-674765-m03"
	I0625 15:57:52.831290   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetMachineName
	I0625 15:57:52.831466   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 15:57:52.834163   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:52.834616   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:52.834642   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:52.834774   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 15:57:52.834930   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:52.835079   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:52.835204   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 15:57:52.835359   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:57:52.835508   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0625 15:57:52.835520   36162 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-674765-m03 && echo "ha-674765-m03" | sudo tee /etc/hostname
	I0625 15:57:52.960308   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-674765-m03
	
	I0625 15:57:52.960331   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 15:57:52.962661   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:52.962978   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:52.963006   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:52.963205   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 15:57:52.963393   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:52.963535   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:52.963676   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 15:57:52.963819   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:57:52.963965   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0625 15:57:52.963980   36162 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-674765-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-674765-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-674765-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0625 15:57:53.091732   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
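
The shell run above keeps /etc/hosts consistent with the new hostname: replace the 127.0.1.1 entry if one exists, otherwise append it. The same logic, sketched in Go purely for illustration (setHostsEntry is a hypothetical helper, not part of minikube):

// Rewrite the 127.0.1.1 line of an /etc/hosts-style file to point at the
// given hostname, appending the entry if none exists.
package main

import (
	"fmt"
	"os"
	"strings"
)

func setHostsEntry(contents, hostname string) string {
	lines := strings.Split(contents, "\n")
	replaced := false
	for i, line := range lines {
		if strings.HasPrefix(line, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return strings.Join(lines, "\n")
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(setHostsEntry(string(data), "ha-674765-m03"))
}
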
	I0625 15:57:53.091760   36162 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19128-13846/.minikube CaCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19128-13846/.minikube}
	I0625 15:57:53.091793   36162 buildroot.go:174] setting up certificates
	I0625 15:57:53.091814   36162 provision.go:84] configureAuth start
	I0625 15:57:53.091837   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetMachineName
	I0625 15:57:53.092146   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetIP
	I0625 15:57:53.094875   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.095285   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:53.095314   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.095503   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 15:57:53.097543   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.097877   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:53.097905   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.097989   36162 provision.go:143] copyHostCerts
	I0625 15:57:53.098031   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem
	I0625 15:57:53.098081   36162 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem, removing ...
	I0625 15:57:53.098092   36162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem
	I0625 15:57:53.098164   36162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem (1679 bytes)
	I0625 15:57:53.098262   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem
	I0625 15:57:53.098298   36162 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem, removing ...
	I0625 15:57:53.098305   36162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem
	I0625 15:57:53.098353   36162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem (1078 bytes)
	I0625 15:57:53.098430   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem
	I0625 15:57:53.098461   36162 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem, removing ...
	I0625 15:57:53.098486   36162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem
	I0625 15:57:53.098522   36162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem (1123 bytes)
	I0625 15:57:53.098590   36162 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem org=jenkins.ha-674765-m03 san=[127.0.0.1 192.168.39.77 ha-674765-m03 localhost minikube]
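
The server certificate generated above is an ordinary x509 certificate whose SANs cover the listed IPs and names. A self-contained sketch of issuing such a certificate with Go's crypto/x509, using a throwaway in-process CA instead of the .minikube CA files (illustrative only, not minikube's provision code):

// Issue a server cert whose SANs match the list logged above
// ([127.0.0.1 192.168.39.77 ha-674765-m03 localhost minikube]).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-674765-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as in the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.77")},
		DNSNames:    []string{"ha-674765-m03", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
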
	I0625 15:57:53.311582   36162 provision.go:177] copyRemoteCerts
	I0625 15:57:53.311635   36162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0625 15:57:53.311653   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 15:57:53.314426   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.314761   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:53.314794   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.315006   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 15:57:53.315210   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:53.315380   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 15:57:53.315572   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/id_rsa Username:docker}
	I0625 15:57:53.405563   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0625 15:57:53.405628   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0625 15:57:53.430960   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0625 15:57:53.431019   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0625 15:57:53.454267   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0625 15:57:53.454322   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0625 15:57:53.477425   36162 provision.go:87] duration metric: took 385.597394ms to configureAuth
	I0625 15:57:53.477458   36162 buildroot.go:189] setting minikube options for container-runtime
	I0625 15:57:53.477688   36162 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 15:57:53.477753   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 15:57:53.480334   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.480689   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:53.480715   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.480903   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 15:57:53.481116   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:53.481305   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:53.481413   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 15:57:53.481638   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:57:53.481794   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0625 15:57:53.481809   36162 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0625 15:57:53.760941   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0625 15:57:53.760970   36162 main.go:141] libmachine: Checking connection to Docker...
	I0625 15:57:53.760978   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetURL
	I0625 15:57:53.762294   36162 main.go:141] libmachine: (ha-674765-m03) DBG | Using libvirt version 6000000
	I0625 15:57:53.764612   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.765018   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:53.765045   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.765213   36162 main.go:141] libmachine: Docker is up and running!
	I0625 15:57:53.765226   36162 main.go:141] libmachine: Reticulating splines...
	I0625 15:57:53.765232   36162 client.go:171] duration metric: took 21.602135409s to LocalClient.Create
	I0625 15:57:53.765251   36162 start.go:167] duration metric: took 21.602194985s to libmachine.API.Create "ha-674765"
	I0625 15:57:53.765260   36162 start.go:293] postStartSetup for "ha-674765-m03" (driver="kvm2")
	I0625 15:57:53.765268   36162 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0625 15:57:53.765283   36162 main.go:141] libmachine: (ha-674765-m03) Calling .DriverName
	I0625 15:57:53.765514   36162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0625 15:57:53.765534   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 15:57:53.767703   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.768140   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:53.768154   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.768286   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 15:57:53.768453   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:53.768577   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 15:57:53.768673   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/id_rsa Username:docker}
	I0625 15:57:53.857525   36162 ssh_runner.go:195] Run: cat /etc/os-release
	I0625 15:57:53.861825   36162 info.go:137] Remote host: Buildroot 2023.02.9
	I0625 15:57:53.861843   36162 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/addons for local assets ...
	I0625 15:57:53.861905   36162 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/files for local assets ...
	I0625 15:57:53.861985   36162 filesync.go:149] local asset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> 212392.pem in /etc/ssl/certs
	I0625 15:57:53.861997   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> /etc/ssl/certs/212392.pem
	I0625 15:57:53.862111   36162 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0625 15:57:53.871438   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem --> /etc/ssl/certs/212392.pem (1708 bytes)
	I0625 15:57:53.895481   36162 start.go:296] duration metric: took 130.210649ms for postStartSetup
	I0625 15:57:53.895531   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetConfigRaw
	I0625 15:57:53.896073   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetIP
	I0625 15:57:53.898403   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.898757   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:53.898780   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.899085   36162 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/config.json ...
	I0625 15:57:53.899301   36162 start.go:128] duration metric: took 21.754290804s to createHost
	I0625 15:57:53.899326   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 15:57:53.901351   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.901656   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:53.901678   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.901842   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 15:57:53.901997   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:53.902160   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:53.902294   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 15:57:53.902448   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:57:53.902621   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0625 15:57:53.902642   36162 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0625 15:57:54.014840   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719331073.982993173
	
	I0625 15:57:54.014869   36162 fix.go:216] guest clock: 1719331073.982993173
	I0625 15:57:54.014880   36162 fix.go:229] Guest: 2024-06-25 15:57:53.982993173 +0000 UTC Remote: 2024-06-25 15:57:53.899314383 +0000 UTC m=+149.267137306 (delta=83.67879ms)
	I0625 15:57:54.014901   36162 fix.go:200] guest clock delta is within tolerance: 83.67879ms
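
The clock check above parses the guest's "date +%s.%N" output and accepts the skew if it stays within a tolerance. A small sketch using the values from this log; the 2-second tolerance is an assumption, not necessarily minikube's exact constant:

// Parse the guest clock string and compare it with the host timestamp
// captured when the SSH command returned.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec).UTC(), nil
}

func main() {
	guest, err := parseGuestClock("1719331073.982993173") // value from the log
	if err != nil {
		panic(err)
	}
	host := time.Date(2024, 6, 25, 15, 57, 53, 899314383, time.UTC) // host timestamp from the log
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second
	fmt.Printf("delta=%s within tolerance=%v\n", delta, delta <= tolerance)
}
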
	I0625 15:57:54.014909   36162 start.go:83] releasing machines lock for "ha-674765-m03", held for 21.86999563s
	I0625 15:57:54.014934   36162 main.go:141] libmachine: (ha-674765-m03) Calling .DriverName
	I0625 15:57:54.015185   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetIP
	I0625 15:57:54.017854   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:54.018181   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:54.018211   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:54.020506   36162 out.go:177] * Found network options:
	I0625 15:57:54.021955   36162 out.go:177]   - NO_PROXY=192.168.39.128,192.168.39.53
	W0625 15:57:54.023329   36162 proxy.go:119] fail to check proxy env: Error ip not in block
	W0625 15:57:54.023346   36162 proxy.go:119] fail to check proxy env: Error ip not in block
	I0625 15:57:54.023384   36162 main.go:141] libmachine: (ha-674765-m03) Calling .DriverName
	I0625 15:57:54.023829   36162 main.go:141] libmachine: (ha-674765-m03) Calling .DriverName
	I0625 15:57:54.023991   36162 main.go:141] libmachine: (ha-674765-m03) Calling .DriverName
	I0625 15:57:54.024065   36162 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0625 15:57:54.024107   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	W0625 15:57:54.024177   36162 proxy.go:119] fail to check proxy env: Error ip not in block
	W0625 15:57:54.024191   36162 proxy.go:119] fail to check proxy env: Error ip not in block
	I0625 15:57:54.024231   36162 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0625 15:57:54.024243   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 15:57:54.026696   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:54.026882   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:54.027121   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:54.027151   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:54.027240   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 15:57:54.027372   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:54.027399   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:54.027441   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:54.027524   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 15:57:54.027592   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 15:57:54.027677   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:54.027744   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/id_rsa Username:docker}
	I0625 15:57:54.027803   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 15:57:54.027910   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/id_rsa Username:docker}
	I0625 15:57:54.258595   36162 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0625 15:57:54.267463   36162 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0625 15:57:54.267536   36162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0625 15:57:54.283400   36162 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0625 15:57:54.283418   36162 start.go:494] detecting cgroup driver to use...
	I0625 15:57:54.283474   36162 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0625 15:57:54.301784   36162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0625 15:57:54.315951   36162 docker.go:217] disabling cri-docker service (if available) ...
	I0625 15:57:54.315991   36162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0625 15:57:54.330200   36162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0625 15:57:54.343260   36162 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0625 15:57:54.458931   36162 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0625 15:57:54.618633   36162 docker.go:233] disabling docker service ...
	I0625 15:57:54.618710   36162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0625 15:57:54.633242   36162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0625 15:57:54.646486   36162 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0625 15:57:54.779838   36162 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0625 15:57:54.903681   36162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0625 15:57:54.917606   36162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0625 15:57:54.939193   36162 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0625 15:57:54.939255   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:57:54.950489   36162 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0625 15:57:54.950553   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:57:54.961722   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:57:54.972476   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:57:54.982665   36162 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0625 15:57:54.993259   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:57:55.003467   36162 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:57:55.020931   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
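
The sed commands above converge on a CRI-O drop-in with the pause image, the cgroupfs cgroup manager, conmon in the pod cgroup, and the unprivileged-port sysctl. An illustrative reconstruction of what such a 02-crio.conf drop-in could contain, written from a short Go program so the sketch stays self-contained (the real file on the VM may differ):

// Write a CRI-O drop-in roughly matching the settings edited above.
package main

import (
	"log"
	"os"
)

const crioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() {
	// In the real flow this lands at /etc/crio/crio.conf.d/02-crio.conf
	// over SSH; writing locally keeps the sketch self-contained.
	if err := os.WriteFile("02-crio.conf", []byte(crioDropIn), 0o644); err != nil {
		log.Fatal(err)
	}
}
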
	I0625 15:57:55.031388   36162 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0625 15:57:55.040605   36162 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0625 15:57:55.040648   36162 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0625 15:57:55.053598   36162 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0625 15:57:55.063355   36162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 15:57:55.184293   36162 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0625 15:57:55.333811   36162 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0625 15:57:55.333870   36162 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0625 15:57:55.339038   36162 start.go:562] Will wait 60s for crictl version
	I0625 15:57:55.339088   36162 ssh_runner.go:195] Run: which crictl
	I0625 15:57:55.342848   36162 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0625 15:57:55.381279   36162 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0625 15:57:55.381365   36162 ssh_runner.go:195] Run: crio --version
	I0625 15:57:55.409289   36162 ssh_runner.go:195] Run: crio --version
	I0625 15:57:55.447658   36162 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0625 15:57:55.448979   36162 out.go:177]   - env NO_PROXY=192.168.39.128
	I0625 15:57:55.450163   36162 out.go:177]   - env NO_PROXY=192.168.39.128,192.168.39.53
	I0625 15:57:55.451313   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetIP
	I0625 15:57:55.453968   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:55.454320   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:55.454344   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:55.454585   36162 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0625 15:57:55.458825   36162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0625 15:57:55.471650   36162 mustload.go:65] Loading cluster: ha-674765
	I0625 15:57:55.471847   36162 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 15:57:55.472082   36162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:57:55.472119   36162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:57:55.486939   36162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42383
	I0625 15:57:55.487364   36162 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:57:55.487847   36162 main.go:141] libmachine: Using API Version  1
	I0625 15:57:55.487867   36162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:57:55.488184   36162 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:57:55.488359   36162 main.go:141] libmachine: (ha-674765) Calling .GetState
	I0625 15:57:55.489897   36162 host.go:66] Checking if "ha-674765" exists ...
	I0625 15:57:55.490184   36162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:57:55.490215   36162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:57:55.504303   36162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39629
	I0625 15:57:55.504624   36162 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:57:55.505032   36162 main.go:141] libmachine: Using API Version  1
	I0625 15:57:55.505052   36162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:57:55.505333   36162 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:57:55.505516   36162 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 15:57:55.505649   36162 certs.go:68] Setting up /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765 for IP: 192.168.39.77
	I0625 15:57:55.505671   36162 certs.go:194] generating shared ca certs ...
	I0625 15:57:55.505692   36162 certs.go:226] acquiring lock for ca certs: {Name:mkac904b769881cd26c50f043dc80ff92937f71f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:57:55.505823   36162 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key
	I0625 15:57:55.505871   36162 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key
	I0625 15:57:55.505883   36162 certs.go:256] generating profile certs ...
	I0625 15:57:55.505973   36162 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.key
	I0625 15:57:55.506004   36162 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.bc4554f3
	I0625 15:57:55.506022   36162 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.bc4554f3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.128 192.168.39.53 192.168.39.77 192.168.39.254]
	I0625 15:57:55.648828   36162 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.bc4554f3 ...
	I0625 15:57:55.648854   36162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.bc4554f3: {Name:mkb9321824526d9fcb14c00a8fe4d2304bf300a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:57:55.649008   36162 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.bc4554f3 ...
	I0625 15:57:55.649019   36162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.bc4554f3: {Name:mk876eecb0530649eecba078952602b65db732ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:57:55.649083   36162 certs.go:381] copying /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.bc4554f3 -> /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt
	I0625 15:57:55.649198   36162 certs.go:385] copying /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.bc4554f3 -> /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key
	I0625 15:57:55.649323   36162 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.key
	I0625 15:57:55.649338   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0625 15:57:55.649350   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0625 15:57:55.649363   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0625 15:57:55.649375   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0625 15:57:55.649388   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0625 15:57:55.649399   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0625 15:57:55.649411   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0625 15:57:55.649423   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0625 15:57:55.649463   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem (1338 bytes)
	W0625 15:57:55.649488   36162 certs.go:480] ignoring /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239_empty.pem, impossibly tiny 0 bytes
	I0625 15:57:55.649497   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem (1679 bytes)
	I0625 15:57:55.649529   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem (1078 bytes)
	I0625 15:57:55.649560   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem (1123 bytes)
	I0625 15:57:55.649591   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem (1679 bytes)
	I0625 15:57:55.649647   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem (1708 bytes)
	I0625 15:57:55.649685   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem -> /usr/share/ca-certificates/21239.pem
	I0625 15:57:55.649701   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> /usr/share/ca-certificates/212392.pem
	I0625 15:57:55.649715   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0625 15:57:55.649745   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:57:55.652612   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:57:55.652960   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:57:55.652982   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:57:55.653109   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:57:55.653285   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:57:55.653414   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:57:55.653539   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 15:57:55.730727   36162 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0625 15:57:55.735497   36162 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0625 15:57:55.746777   36162 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0625 15:57:55.750883   36162 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0625 15:57:55.762298   36162 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0625 15:57:55.766477   36162 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0625 15:57:55.776696   36162 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0625 15:57:55.781544   36162 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0625 15:57:55.791265   36162 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0625 15:57:55.795550   36162 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0625 15:57:55.805049   36162 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0625 15:57:55.809045   36162 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0625 15:57:55.819392   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0625 15:57:55.845523   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0625 15:57:55.869662   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0625 15:57:55.892727   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0625 15:57:55.916900   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0625 15:57:55.940788   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0625 15:57:55.964303   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0625 15:57:55.988802   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0625 15:57:56.012333   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem --> /usr/share/ca-certificates/21239.pem (1338 bytes)
	I0625 15:57:56.035727   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem --> /usr/share/ca-certificates/212392.pem (1708 bytes)
	I0625 15:57:56.058836   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0625 15:57:56.082713   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0625 15:57:56.098551   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0625 15:57:56.115185   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0625 15:57:56.131213   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0625 15:57:56.147568   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0625 15:57:56.165389   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0625 15:57:56.182891   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
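
The block above stages the shared CA material, the profile certificates, and the kubeconfig onto the new control-plane machine over SSH before kubeadm runs. As a rough illustration of that transfer step (not minikube's actual sshutil/ssh_runner code), a minimal sketch using golang.org/x/crypto/ssh might look like the following; the host, user, and key path are taken from the ssh client line above, while the local file name is hypothetical:

    // Illustrative only: pushes a local PEM file to the node over SSH by piping
    // it into `sudo tee`, roughly what the scp steps logged above record.
    package main

    import (
        "bytes"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path as reported by sshutil.go above.
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        client, err := ssh.Dial("tcp", "192.168.39.128:22", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        })
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        data, err := os.ReadFile("ca.crt") // hypothetical local cert to copy
        if err != nil {
            log.Fatal(err)
        }
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        if err := sess.Run("sudo tee /var/lib/minikube/certs/ca.crt >/dev/null"); err != nil {
            log.Fatal(err)
        }
    }
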
	I0625 15:57:56.200285   36162 ssh_runner.go:195] Run: openssl version
	I0625 15:57:56.206203   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21239.pem && ln -fs /usr/share/ca-certificates/21239.pem /etc/ssl/certs/21239.pem"
	I0625 15:57:56.219074   36162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21239.pem
	I0625 15:57:56.223771   36162 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 25 15:51 /usr/share/ca-certificates/21239.pem
	I0625 15:57:56.223812   36162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21239.pem
	I0625 15:57:56.230373   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21239.pem /etc/ssl/certs/51391683.0"
	I0625 15:57:56.242946   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/212392.pem && ln -fs /usr/share/ca-certificates/212392.pem /etc/ssl/certs/212392.pem"
	I0625 15:57:56.255177   36162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/212392.pem
	I0625 15:57:56.259689   36162 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 25 15:51 /usr/share/ca-certificates/212392.pem
	I0625 15:57:56.259747   36162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/212392.pem
	I0625 15:57:56.265463   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/212392.pem /etc/ssl/certs/3ec20f2e.0"
	I0625 15:57:56.277505   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0625 15:57:56.289907   36162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0625 15:57:56.294823   36162 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 25 15:10 /usr/share/ca-certificates/minikubeCA.pem
	I0625 15:57:56.294870   36162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0625 15:57:56.300383   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0625 15:57:56.311084   36162 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0625 15:57:56.314987   36162 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0625 15:57:56.315041   36162 kubeadm.go:928] updating node {m03 192.168.39.77 8443 v1.30.2 crio true true} ...
	I0625 15:57:56.315127   36162 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-674765-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-674765 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0625 15:57:56.315152   36162 kube-vip.go:115] generating kube-vip config ...
	I0625 15:57:56.315186   36162 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0625 15:57:56.332544   36162 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0625 15:57:56.332596   36162 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
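
The manifest above is the generated kube-vip static pod that advertises the HA virtual IP 192.168.39.254 on port 8443 and elects a leader among the control planes via the plndr-cp-lock lease. As a sketch of how such a manifest can be rendered from a handful of parameters (this is a simplified, assumed template, not minikube's actual one), a text/template rendering in Go could look like:

    // Illustrative only: renders a kube-vip-style static pod manifest from a
    // few parameters, similar in spirit to the generated config logged above.
    package main

    import (
        "log"
        "os"
        "text/template"
    )

    const manifest = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: {{.Image}}
        args: ["manager"]
        env:
        - name: address
          value: "{{.VIP}}"
        - name: port
          value: "{{.Port}}"
      hostNetwork: true
    `

    func main() {
        t := template.Must(template.New("kube-vip").Parse(manifest))
        params := struct{ Image, VIP, Port string }{
            "ghcr.io/kube-vip/kube-vip:v0.8.0", "192.168.39.254", "8443",
        }
        if err := t.Execute(os.Stdout, params); err != nil {
            log.Fatal(err)
        }
    }
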
	I0625 15:57:56.332645   36162 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0625 15:57:56.342307   36162 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0625 15:57:56.342357   36162 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0625 15:57:56.352425   36162 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0625 15:57:56.352452   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0625 15:57:56.352471   36162 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256
	I0625 15:57:56.352488   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0625 15:57:56.352501   36162 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256
	I0625 15:57:56.352515   36162 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0625 15:57:56.352550   36162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 15:57:56.352553   36162 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0625 15:57:56.357066   36162 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0625 15:57:56.357093   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0625 15:57:56.384153   36162 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0625 15:57:56.384188   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0625 15:57:56.384219   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0625 15:57:56.384307   36162 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0625 15:57:56.440143   36162 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0625 15:57:56.440181   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
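
Because /var/lib/minikube/binaries/v1.30.2 was empty on the new node, kubectl, kubeadm, and kubelet are pushed from the local cache, which in turn is populated from the dl.k8s.io URLs with their published .sha256 checksums. A minimal sketch of that download-and-verify step, assuming the same release URL as in the log (this is not minikube's download code):

    // Illustrative only: fetches a Kubernetes binary and checks it against the
    // published .sha256 file, mirroring the checksum URLs in the log above.
    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "log"
        "net/http"
        "os"
        "strings"
    )

    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
        }
        return io.ReadAll(resp.Body)
    }

    func main() {
        base := "https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet"
        bin, err := fetch(base)
        if err != nil {
            log.Fatal(err)
        }
        sum, err := fetch(base + ".sha256")
        if err != nil {
            log.Fatal(err)
        }
        got := sha256.Sum256(bin)
        want := strings.Fields(string(sum))[0] // the .sha256 file holds the hex digest
        if hex.EncodeToString(got[:]) != want {
            log.Fatalf("checksum mismatch: got %x want %s", got, want)
        }
        if err := os.WriteFile("kubelet", bin, 0o755); err != nil {
            log.Fatal(err)
        }
        fmt.Println("kubelet verified and written")
    }
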
	I0625 15:57:57.210538   36162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0625 15:57:57.220712   36162 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0625 15:57:57.238402   36162 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0625 15:57:57.256107   36162 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0625 15:57:57.273920   36162 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0625 15:57:57.277976   36162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
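
The bash one-liner above pins control-plane.minikube.internal to the HA VIP in /etc/hosts by filtering out any previous entry and appending the new one. The same edit expressed as a small Go sketch (illustrative only; the real flow runs the bash version over SSH with sudo):

    // Illustrative only: drop any stale control-plane.minikube.internal entry
    // from /etc/hosts and pin the name to the HA VIP, as the command above does.
    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const entry = "192.168.39.254\tcontrol-plane.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry)
        // Writing /etc/hosts needs root, like the `sudo cp` in the logged command.
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            log.Fatal(err)
        }
    }
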
	I0625 15:57:57.292015   36162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 15:57:57.415561   36162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0625 15:57:57.433434   36162 host.go:66] Checking if "ha-674765" exists ...
	I0625 15:57:57.433886   36162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:57:57.433944   36162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:57:57.449349   36162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33363
	I0625 15:57:57.449733   36162 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:57:57.450211   36162 main.go:141] libmachine: Using API Version  1
	I0625 15:57:57.450232   36162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:57:57.450629   36162 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:57:57.450828   36162 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 15:57:57.450975   36162 start.go:316] joinCluster: &{Name:ha-674765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-674765 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.77 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 15:57:57.451136   36162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0625 15:57:57.451169   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:57:57.454112   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:57:57.454577   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:57:57.454605   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:57:57.454752   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:57:57.454933   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:57:57.455098   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:57:57.455256   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 15:57:57.615676   36162 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.77 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0625 15:57:57.615727   36162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token sxxgbm.6pnydo1y71smfsmd --discovery-token-ca-cert-hash sha256:df4523a4334c80aff4a7c2fc7b4a73691744a675a28cdb3d4468287f693ab03d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-674765-m03 --control-plane --apiserver-advertise-address=192.168.39.77 --apiserver-bind-port=8443"
	I0625 15:58:19.941664   36162 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token sxxgbm.6pnydo1y71smfsmd --discovery-token-ca-cert-hash sha256:df4523a4334c80aff4a7c2fc7b4a73691744a675a28cdb3d4468287f693ab03d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-674765-m03 --control-plane --apiserver-advertise-address=192.168.39.77 --apiserver-bind-port=8443": (22.325905156s)
	I0625 15:58:19.941700   36162 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0625 15:58:20.572350   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-674765-m03 minikube.k8s.io/updated_at=2024_06_25T15_58_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b minikube.k8s.io/name=ha-674765 minikube.k8s.io/primary=false
	I0625 15:58:20.688902   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-674765-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0625 15:58:20.799607   36162 start.go:318] duration metric: took 23.348630958s to joinCluster
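
Joining m03 as a third control plane is a two-step flow: kubeadm token create --print-join-command on an existing control plane mints a join command, and that command is then executed on the new node with the extra control-plane flags minikube appends. A condensed sketch of that flow (both steps run locally via os/exec here purely for illustration; in the log they run over SSH on the respective VMs):

    // Illustrative only: mint a join command on an existing control plane,
    // then run it with the extra control-plane flags seen in the log above.
    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
        if err != nil {
            log.Fatal(err)
        }
        join := strings.Fields(strings.TrimSpace(string(out)))
        join = append(join,
            "--control-plane",
            "--apiserver-advertise-address=192.168.39.77",
            "--apiserver-bind-port=8443",
            "--node-name=ha-674765-m03",
        )
        cmd := exec.Command(join[0], join[1:]...)
        if b, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("%v\n%s", err, b)
        }
    }
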
	I0625 15:58:20.799660   36162 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.77 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0625 15:58:20.800004   36162 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 15:58:20.801104   36162 out.go:177] * Verifying Kubernetes components...
	I0625 15:58:20.802436   36162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 15:58:21.103357   36162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0625 15:58:21.125097   36162 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19128-13846/kubeconfig
	I0625 15:58:21.125357   36162 kapi.go:59] client config for ha-674765: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.crt", KeyFile:"/home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.key", CAFile:"/home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfa3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0625 15:58:21.125426   36162 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.128:8443
	I0625 15:58:21.125637   36162 node_ready.go:35] waiting up to 6m0s for node "ha-674765-m03" to be "Ready" ...
	I0625 15:58:21.125711   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:21.125721   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:21.125732   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:21.125740   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:21.129364   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:21.626179   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:21.626199   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:21.626209   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:21.626213   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:21.636551   36162 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0625 15:58:22.126587   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:22.126607   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:22.126615   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:22.126620   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:22.130419   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:22.626424   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:22.626449   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:22.626460   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:22.626463   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:22.630458   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:23.126567   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:23.126592   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:23.126604   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:23.126610   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:23.130434   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:23.131066   36162 node_ready.go:53] node "ha-674765-m03" has status "Ready":"False"
	I0625 15:58:23.626527   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:23.626550   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:23.626560   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:23.626564   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:23.630032   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:24.125913   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:24.125937   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:24.125949   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:24.125957   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:24.128997   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:24.626825   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:24.626846   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:24.626854   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:24.626859   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:24.630142   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:25.126559   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:25.126580   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:25.126588   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:25.126592   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:25.129571   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:25.626433   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:25.626454   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:25.626464   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:25.626483   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:25.629930   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:25.630777   36162 node_ready.go:53] node "ha-674765-m03" has status "Ready":"False"
	I0625 15:58:26.125981   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:26.126003   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:26.126012   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:26.126016   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:26.130081   36162 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0625 15:58:26.626721   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:26.626744   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:26.626756   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:26.626761   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:26.630683   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:27.126830   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:27.126855   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:27.126867   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:27.126873   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:27.130321   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:27.626460   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:27.626500   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:27.626509   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:27.626513   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:27.629237   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:28.126111   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:28.126132   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:28.126140   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:28.126145   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:28.129840   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:28.130827   36162 node_ready.go:53] node "ha-674765-m03" has status "Ready":"False"
	I0625 15:58:28.626156   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:28.626176   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:28.626185   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:28.626188   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:28.630375   36162 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0625 15:58:28.631142   36162 node_ready.go:49] node "ha-674765-m03" has status "Ready":"True"
	I0625 15:58:28.631165   36162 node_ready.go:38] duration metric: took 7.505510142s for node "ha-674765-m03" to be "Ready" ...
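
The polling loop above issues GET /api/v1/nodes/ha-674765-m03 roughly every 500ms until the node reports Ready, which took about 7.5s here. The equivalent check written against client-go, assuming the kubeconfig path from the log, might look like the following sketch (not minikube's node_ready implementation):

    // Illustrative only: waits for a node to report Ready via client-go, the
    // same condition the hand-rolled GET loop above is checking.
    package main

    import (
        "context"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19128-13846/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, "ha-674765-m03", metav1.GetOptions{})
                if err != nil {
                    return false, nil // keep retrying on transient errors
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            log.Fatal(err)
        }
        log.Println("node ha-674765-m03 is Ready")
    }
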
	I0625 15:58:28.631177   36162 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0625 15:58:28.631252   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0625 15:58:28.631267   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:28.631276   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:28.631280   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:28.639163   36162 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0625 15:58:28.645727   36162 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-28db5" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:28.645795   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-28db5
	I0625 15:58:28.645807   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:28.645817   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:28.645823   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:28.648395   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:28.649046   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:58:28.649062   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:28.649072   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:28.649082   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:28.651681   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:28.652234   36162 pod_ready.go:92] pod "coredns-7db6d8ff4d-28db5" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:28.652252   36162 pod_ready.go:81] duration metric: took 6.503502ms for pod "coredns-7db6d8ff4d-28db5" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:28.652263   36162 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-84zkt" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:28.652320   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-84zkt
	I0625 15:58:28.652330   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:28.652340   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:28.652350   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:28.661307   36162 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0625 15:58:28.661992   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:58:28.662006   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:28.662016   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:28.662021   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:28.684062   36162 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0625 15:58:28.684759   36162 pod_ready.go:92] pod "coredns-7db6d8ff4d-84zkt" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:28.684776   36162 pod_ready.go:81] duration metric: took 32.502068ms for pod "coredns-7db6d8ff4d-84zkt" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:28.684789   36162 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:28.684853   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765
	I0625 15:58:28.684864   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:28.684874   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:28.684882   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:28.692708   36162 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0625 15:58:28.693424   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:58:28.693435   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:28.693442   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:28.693446   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:28.702178   36162 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0625 15:58:28.702897   36162 pod_ready.go:92] pod "etcd-ha-674765" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:28.702915   36162 pod_ready.go:81] duration metric: took 18.118053ms for pod "etcd-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:28.702926   36162 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:28.702975   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m02
	I0625 15:58:28.702987   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:28.702997   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:28.703007   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:28.711387   36162 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0625 15:58:28.712046   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:58:28.712067   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:28.712077   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:28.712082   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:28.718330   36162 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0625 15:58:28.718897   36162 pod_ready.go:92] pod "etcd-ha-674765-m02" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:28.718914   36162 pod_ready.go:81] duration metric: took 15.981652ms for pod "etcd-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:28.718922   36162 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-674765-m03" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:28.826168   36162 request.go:629] Waited for 107.187135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m03
	I0625 15:58:28.826225   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m03
	I0625 15:58:28.826230   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:28.826238   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:28.826244   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:28.829951   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:29.026917   36162 request.go:629] Waited for 196.356128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:29.026986   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:29.026992   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:29.026999   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:29.027002   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:29.030159   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:29.226523   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m03
	I0625 15:58:29.226543   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:29.226551   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:29.226555   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:29.230175   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:29.427146   36162 request.go:629] Waited for 196.324759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:29.427202   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:29.427207   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:29.427215   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:29.427219   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:29.430166   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:29.719996   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m03
	I0625 15:58:29.720014   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:29.720022   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:29.720026   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:29.723890   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:29.826403   36162 request.go:629] Waited for 101.178342ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:29.826448   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:29.826453   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:29.826460   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:29.826491   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:29.829211   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:30.219587   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m03
	I0625 15:58:30.219611   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:30.219622   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:30.219627   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:30.223664   36162 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0625 15:58:30.226852   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:30.226869   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:30.226877   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:30.226884   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:30.230088   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:30.719224   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m03
	I0625 15:58:30.719254   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:30.719265   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:30.719270   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:30.722867   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:30.723549   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:30.723565   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:30.723575   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:30.723580   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:30.726547   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:30.727281   36162 pod_ready.go:102] pod "etcd-ha-674765-m03" in "kube-system" namespace has status "Ready":"False"
	I0625 15:58:31.219144   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m03
	I0625 15:58:31.219169   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:31.219179   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:31.219186   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:31.223298   36162 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0625 15:58:31.224233   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:31.224252   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:31.224263   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:31.224269   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:31.227155   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:31.720125   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m03
	I0625 15:58:31.720150   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:31.720162   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:31.720167   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:31.723659   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:31.724457   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:31.724475   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:31.724485   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:31.724493   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:31.726925   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:32.220033   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m03
	I0625 15:58:32.220068   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:32.220080   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:32.220088   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:32.224872   36162 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0625 15:58:32.225501   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:32.225514   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:32.225525   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:32.225529   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:32.228598   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:32.719227   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m03
	I0625 15:58:32.719258   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:32.719271   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:32.719276   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:32.723163   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:32.723990   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:32.724009   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:32.724021   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:32.724027   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:32.727211   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:32.727892   36162 pod_ready.go:102] pod "etcd-ha-674765-m03" in "kube-system" namespace has status "Ready":"False"
	I0625 15:58:33.219201   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m03
	I0625 15:58:33.219230   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:33.219241   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:33.219248   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:33.223431   36162 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0625 15:58:33.224332   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:33.224349   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:33.224358   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:33.224361   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:33.227456   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:33.720112   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m03
	I0625 15:58:33.720135   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:33.720146   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:33.720152   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:33.724243   36162 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0625 15:58:33.724982   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:33.724996   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:33.725004   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:33.725008   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:33.727577   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:34.219068   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m03
	I0625 15:58:34.219092   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:34.219101   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:34.219106   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:34.222789   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:34.223667   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:34.223685   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:34.223695   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:34.223700   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:34.226290   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:34.719177   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m03
	I0625 15:58:34.719205   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:34.719216   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:34.719222   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:34.723967   36162 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0625 15:58:34.724690   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:34.724706   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:34.724713   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:34.724718   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:34.727196   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:34.727673   36162 pod_ready.go:92] pod "etcd-ha-674765-m03" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:34.727695   36162 pod_ready.go:81] duration metric: took 6.008765887s for pod "etcd-ha-674765-m03" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:34.727719   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:34.727787   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-674765
	I0625 15:58:34.727796   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:34.727809   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:34.727817   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:34.730233   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:34.731397   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:58:34.731415   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:34.731423   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:34.731428   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:34.733788   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:34.734356   36162 pod_ready.go:92] pod "kube-apiserver-ha-674765" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:34.734374   36162 pod_ready.go:81] duration metric: took 6.644453ms for pod "kube-apiserver-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:34.734382   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:34.734438   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-674765-m02
	I0625 15:58:34.734449   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:34.734459   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:34.734487   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:34.736696   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:34.737264   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:58:34.737283   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:34.737293   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:34.737300   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:34.739591   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:34.740138   36162 pod_ready.go:92] pod "kube-apiserver-ha-674765-m02" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:34.740156   36162 pod_ready.go:81] duration metric: took 5.766096ms for pod "kube-apiserver-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:34.740166   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-674765-m03" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:34.826542   36162 request.go:629] Waited for 86.319241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-674765-m03
	I0625 15:58:34.826615   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-674765-m03
	I0625 15:58:34.826623   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:34.826630   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:34.826637   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:34.830069   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:35.026189   36162 request.go:629] Waited for 195.115459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:35.026250   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:35.026255   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:35.026262   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:35.026266   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:35.030657   36162 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0625 15:58:35.031158   36162 pod_ready.go:92] pod "kube-apiserver-ha-674765-m03" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:35.031176   36162 pod_ready.go:81] duration metric: took 291.001645ms for pod "kube-apiserver-ha-674765-m03" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:35.031185   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:35.226547   36162 request.go:629] Waited for 195.302496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-674765
	I0625 15:58:35.226619   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-674765
	I0625 15:58:35.226626   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:35.226635   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:35.226641   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:35.230134   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:35.427026   36162 request.go:629] Waited for 196.04705ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:58:35.427114   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:58:35.427123   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:35.427137   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:35.427143   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:35.430233   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:35.430896   36162 pod_ready.go:92] pod "kube-controller-manager-ha-674765" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:35.430914   36162 pod_ready.go:81] duration metric: took 399.722704ms for pod "kube-controller-manager-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:35.430923   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:35.626668   36162 request.go:629] Waited for 195.688648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-674765-m02
	I0625 15:58:35.626755   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-674765-m02
	I0625 15:58:35.626766   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:35.626777   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:35.626785   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:35.630604   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:35.826972   36162 request.go:629] Waited for 195.349311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:58:35.827023   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:58:35.827029   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:35.827040   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:35.827045   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:35.830575   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:35.831239   36162 pod_ready.go:92] pod "kube-controller-manager-ha-674765-m02" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:35.831260   36162 pod_ready.go:81] duration metric: took 400.329985ms for pod "kube-controller-manager-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:35.831273   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-674765-m03" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:36.026223   36162 request.go:629] Waited for 194.87977ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-674765-m03
	I0625 15:58:36.026285   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-674765-m03
	I0625 15:58:36.026294   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:36.026314   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:36.026334   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:36.029365   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:36.226358   36162 request.go:629] Waited for 196.299154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:36.226430   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:36.226441   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:36.226453   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:36.226460   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:36.230009   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:36.230751   36162 pod_ready.go:92] pod "kube-controller-manager-ha-674765-m03" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:36.230772   36162 pod_ready.go:81] duration metric: took 399.490216ms for pod "kube-controller-manager-ha-674765-m03" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:36.230785   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lsmft" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:36.426859   36162 request.go:629] Waited for 195.997385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lsmft
	I0625 15:58:36.426956   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lsmft
	I0625 15:58:36.426968   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:36.426975   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:36.426982   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:36.429723   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:36.627217   36162 request.go:629] Waited for 196.650446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:58:36.627314   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:58:36.627325   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:36.627337   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:36.627350   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:36.630619   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:36.631438   36162 pod_ready.go:92] pod "kube-proxy-lsmft" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:36.631456   36162 pod_ready.go:81] duration metric: took 400.664094ms for pod "kube-proxy-lsmft" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:36.631464   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rh9n5" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:36.826547   36162 request.go:629] Waited for 195.025136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rh9n5
	I0625 15:58:36.826650   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rh9n5
	I0625 15:58:36.826663   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:36.826675   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:36.826683   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:36.829983   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:37.027064   36162 request.go:629] Waited for 196.337499ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:58:37.027150   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:58:37.027161   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:37.027171   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:37.027176   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:37.030113   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:37.030746   36162 pod_ready.go:92] pod "kube-proxy-rh9n5" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:37.030765   36162 pod_ready.go:81] duration metric: took 399.29603ms for pod "kube-proxy-rh9n5" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:37.030774   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-swfsx" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:37.227213   36162 request.go:629] Waited for 196.369052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-swfsx
	I0625 15:58:37.227268   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-swfsx
	I0625 15:58:37.227273   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:37.227281   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:37.227286   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:37.230330   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:37.426492   36162 request.go:629] Waited for 195.357462ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:37.426543   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:37.426548   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:37.426555   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:37.426560   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:37.429824   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:37.430641   36162 pod_ready.go:92] pod "kube-proxy-swfsx" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:37.430661   36162 pod_ready.go:81] duration metric: took 399.881552ms for pod "kube-proxy-swfsx" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:37.430669   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:37.627091   36162 request.go:629] Waited for 196.368488ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-674765
	I0625 15:58:37.627159   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-674765
	I0625 15:58:37.627180   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:37.627195   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:37.627200   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:37.630762   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:37.827002   36162 request.go:629] Waited for 195.371695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:58:37.827078   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:58:37.827084   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:37.827092   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:37.827099   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:37.830911   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:37.831841   36162 pod_ready.go:92] pod "kube-scheduler-ha-674765" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:37.831860   36162 pod_ready.go:81] duration metric: took 401.186016ms for pod "kube-scheduler-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:37.831869   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:38.026546   36162 request.go:629] Waited for 194.603271ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-674765-m02
	I0625 15:58:38.026599   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-674765-m02
	I0625 15:58:38.026603   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:38.026609   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:38.026614   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:38.029502   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:38.226557   36162 request.go:629] Waited for 196.38695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:58:38.226648   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:58:38.226689   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:38.226705   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:38.226709   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:38.230980   36162 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0625 15:58:38.232238   36162 pod_ready.go:92] pod "kube-scheduler-ha-674765-m02" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:38.232276   36162 pod_ready.go:81] duration metric: took 400.379729ms for pod "kube-scheduler-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:38.232286   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-674765-m03" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:38.426342   36162 request.go:629] Waited for 193.98135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-674765-m03
	I0625 15:58:38.426430   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-674765-m03
	I0625 15:58:38.426439   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:38.426453   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:38.426462   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:38.429567   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:38.626312   36162 request.go:629] Waited for 196.10206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:38.626366   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:38.626372   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:38.626379   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:38.626383   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:38.630649   36162 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0625 15:58:38.631277   36162 pod_ready.go:92] pod "kube-scheduler-ha-674765-m03" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:38.631296   36162 pod_ready.go:81] duration metric: took 399.000574ms for pod "kube-scheduler-ha-674765-m03" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:38.631310   36162 pod_ready.go:38] duration metric: took 10.000120706s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0625 15:58:38.631330   36162 api_server.go:52] waiting for apiserver process to appear ...
	I0625 15:58:38.631388   36162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 15:58:38.649832   36162 api_server.go:72] duration metric: took 17.850151268s to wait for apiserver process to appear ...
	I0625 15:58:38.649848   36162 api_server.go:88] waiting for apiserver healthz status ...
	I0625 15:58:38.649862   36162 api_server.go:253] Checking apiserver healthz at https://192.168.39.128:8443/healthz ...
	I0625 15:58:38.656751   36162 api_server.go:279] https://192.168.39.128:8443/healthz returned 200:
	ok
	I0625 15:58:38.656819   36162 round_trippers.go:463] GET https://192.168.39.128:8443/version
	I0625 15:58:38.656831   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:38.656841   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:38.656850   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:38.658054   36162 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0625 15:58:38.658108   36162 api_server.go:141] control plane version: v1.30.2
	I0625 15:58:38.658123   36162 api_server.go:131] duration metric: took 8.269474ms to wait for apiserver health ...
	I0625 15:58:38.658130   36162 system_pods.go:43] waiting for kube-system pods to appear ...
	I0625 15:58:38.826522   36162 request.go:629] Waited for 168.332415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0625 15:58:38.826620   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0625 15:58:38.826631   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:38.826642   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:38.826651   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:38.833753   36162 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0625 15:58:38.841234   36162 system_pods.go:59] 24 kube-system pods found
	I0625 15:58:38.841260   36162 system_pods.go:61] "coredns-7db6d8ff4d-28db5" [1426e4a3-2f25-47e9-9b28-b23a81a3a19a] Running
	I0625 15:58:38.841267   36162 system_pods.go:61] "coredns-7db6d8ff4d-84zkt" [2f6426f8-a0c4-470c-b2b1-b62fa304c078] Running
	I0625 15:58:38.841271   36162 system_pods.go:61] "etcd-ha-674765" [a8f7d82c-8fc7-4190-99c2-0bedc24d8f4f] Running
	I0625 15:58:38.841276   36162 system_pods.go:61] "etcd-ha-674765-m02" [e3f94832-96fe-4bbf-8c53-86bab692b6a9] Running
	I0625 15:58:38.841281   36162 system_pods.go:61] "etcd-ha-674765-m03" [19a0a3e5-4f97-4ec1-9131-2cb687d36d77] Running
	I0625 15:58:38.841286   36162 system_pods.go:61] "kindnet-kkgdq" [cfb408ee-0f73-4537-87fb-fad3d2b1f3f1] Running
	I0625 15:58:38.841291   36162 system_pods.go:61] "kindnet-ntq77" [37736a9f-5b4c-421c-9027-81e961ab8550] Running
	I0625 15:58:38.841295   36162 system_pods.go:61] "kindnet-px4dn" [27ef663b-4867-4757-9e02-5086d4875471] Running
	I0625 15:58:38.841299   36162 system_pods.go:61] "kube-apiserver-ha-674765" [594e5a19-d80b-4b26-8c91-a8475fb99630] Running
	I0625 15:58:38.841304   36162 system_pods.go:61] "kube-apiserver-ha-674765-m02" [e00ad102-e252-49e9-82e4-b466ae4eb7b2] Running
	I0625 15:58:38.841309   36162 system_pods.go:61] "kube-apiserver-ha-674765-m03" [90f8d49f-694e-4872-9a70-c1211b79cefd] Running
	I0625 15:58:38.841314   36162 system_pods.go:61] "kube-controller-manager-ha-674765" [5f4f1e7d-f796-4762-9f33-61755c0daef3] Running
	I0625 15:58:38.841322   36162 system_pods.go:61] "kube-controller-manager-ha-674765-m02" [acb4b5ca-b29e-4866-be68-eb4c6425463d] Running
	I0625 15:58:38.841328   36162 system_pods.go:61] "kube-controller-manager-ha-674765-m03" [69ff2a00-e5ef-406d-aad3-aeb3fc0768b4] Running
	I0625 15:58:38.841333   36162 system_pods.go:61] "kube-proxy-lsmft" [fa5d210a-1295-497c-8a24-6a0f0dc941de] Running
	I0625 15:58:38.841338   36162 system_pods.go:61] "kube-proxy-rh9n5" [a0a24539-3168-42cc-93b3-d0b1e283d0bd] Running
	I0625 15:58:38.841347   36162 system_pods.go:61] "kube-proxy-swfsx" [d1d30f80-d2b4-4d24-8322-69850b1f882a] Running
	I0625 15:58:38.841353   36162 system_pods.go:61] "kube-scheduler-ha-674765" [2695280a-4dd5-4073-875e-63e5238bd1b7] Running
	I0625 15:58:38.841362   36162 system_pods.go:61] "kube-scheduler-ha-674765-m02" [dc04f489-1084-48d4-8cec-c79ec30e0987] Running
	I0625 15:58:38.841367   36162 system_pods.go:61] "kube-scheduler-ha-674765-m03" [231cafab-eb37-496f-aa2d-662d27d18ef0] Running
	I0625 15:58:38.841372   36162 system_pods.go:61] "kube-vip-ha-674765" [1d132475-65bb-43d1-9353-12b7be1f311f] Running
	I0625 15:58:38.841378   36162 system_pods.go:61] "kube-vip-ha-674765-m02" [dbde28c7-a109-4a7e-97bb-27576a94d2fe] Running
	I0625 15:58:38.841384   36162 system_pods.go:61] "kube-vip-ha-674765-m03" [08c72802-7f04-47c2-956a-8adc1a430e56] Running
	I0625 15:58:38.841390   36162 system_pods.go:61] "storage-provisioner" [c227c5cf-2bd6-4ebf-9fdb-09d4229cf421] Running
	I0625 15:58:38.841398   36162 system_pods.go:74] duration metric: took 183.259451ms to wait for pod list to return data ...
	I0625 15:58:38.841410   36162 default_sa.go:34] waiting for default service account to be created ...
	I0625 15:58:39.026820   36162 request.go:629] Waited for 185.339864ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/default/serviceaccounts
	I0625 15:58:39.026887   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/default/serviceaccounts
	I0625 15:58:39.026892   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:39.026900   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:39.026904   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:39.032234   36162 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0625 15:58:39.032513   36162 default_sa.go:45] found service account: "default"
	I0625 15:58:39.032532   36162 default_sa.go:55] duration metric: took 191.115688ms for default service account to be created ...
	I0625 15:58:39.032544   36162 system_pods.go:116] waiting for k8s-apps to be running ...
	I0625 15:58:39.226977   36162 request.go:629] Waited for 194.363988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0625 15:58:39.227057   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0625 15:58:39.227067   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:39.227080   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:39.227086   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:39.236119   36162 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0625 15:58:39.242986   36162 system_pods.go:86] 24 kube-system pods found
	I0625 15:58:39.243010   36162 system_pods.go:89] "coredns-7db6d8ff4d-28db5" [1426e4a3-2f25-47e9-9b28-b23a81a3a19a] Running
	I0625 15:58:39.243018   36162 system_pods.go:89] "coredns-7db6d8ff4d-84zkt" [2f6426f8-a0c4-470c-b2b1-b62fa304c078] Running
	I0625 15:58:39.243025   36162 system_pods.go:89] "etcd-ha-674765" [a8f7d82c-8fc7-4190-99c2-0bedc24d8f4f] Running
	I0625 15:58:39.243031   36162 system_pods.go:89] "etcd-ha-674765-m02" [e3f94832-96fe-4bbf-8c53-86bab692b6a9] Running
	I0625 15:58:39.243043   36162 system_pods.go:89] "etcd-ha-674765-m03" [19a0a3e5-4f97-4ec1-9131-2cb687d36d77] Running
	I0625 15:58:39.243050   36162 system_pods.go:89] "kindnet-kkgdq" [cfb408ee-0f73-4537-87fb-fad3d2b1f3f1] Running
	I0625 15:58:39.243056   36162 system_pods.go:89] "kindnet-ntq77" [37736a9f-5b4c-421c-9027-81e961ab8550] Running
	I0625 15:58:39.243064   36162 system_pods.go:89] "kindnet-px4dn" [27ef663b-4867-4757-9e02-5086d4875471] Running
	I0625 15:58:39.243073   36162 system_pods.go:89] "kube-apiserver-ha-674765" [594e5a19-d80b-4b26-8c91-a8475fb99630] Running
	I0625 15:58:39.243080   36162 system_pods.go:89] "kube-apiserver-ha-674765-m02" [e00ad102-e252-49e9-82e4-b466ae4eb7b2] Running
	I0625 15:58:39.243091   36162 system_pods.go:89] "kube-apiserver-ha-674765-m03" [90f8d49f-694e-4872-9a70-c1211b79cefd] Running
	I0625 15:58:39.243101   36162 system_pods.go:89] "kube-controller-manager-ha-674765" [5f4f1e7d-f796-4762-9f33-61755c0daef3] Running
	I0625 15:58:39.243110   36162 system_pods.go:89] "kube-controller-manager-ha-674765-m02" [acb4b5ca-b29e-4866-be68-eb4c6425463d] Running
	I0625 15:58:39.243119   36162 system_pods.go:89] "kube-controller-manager-ha-674765-m03" [69ff2a00-e5ef-406d-aad3-aeb3fc0768b4] Running
	I0625 15:58:39.243128   36162 system_pods.go:89] "kube-proxy-lsmft" [fa5d210a-1295-497c-8a24-6a0f0dc941de] Running
	I0625 15:58:39.243134   36162 system_pods.go:89] "kube-proxy-rh9n5" [a0a24539-3168-42cc-93b3-d0b1e283d0bd] Running
	I0625 15:58:39.243140   36162 system_pods.go:89] "kube-proxy-swfsx" [d1d30f80-d2b4-4d24-8322-69850b1f882a] Running
	I0625 15:58:39.243146   36162 system_pods.go:89] "kube-scheduler-ha-674765" [2695280a-4dd5-4073-875e-63e5238bd1b7] Running
	I0625 15:58:39.243153   36162 system_pods.go:89] "kube-scheduler-ha-674765-m02" [dc04f489-1084-48d4-8cec-c79ec30e0987] Running
	I0625 15:58:39.243164   36162 system_pods.go:89] "kube-scheduler-ha-674765-m03" [231cafab-eb37-496f-aa2d-662d27d18ef0] Running
	I0625 15:58:39.243173   36162 system_pods.go:89] "kube-vip-ha-674765" [1d132475-65bb-43d1-9353-12b7be1f311f] Running
	I0625 15:58:39.243180   36162 system_pods.go:89] "kube-vip-ha-674765-m02" [dbde28c7-a109-4a7e-97bb-27576a94d2fe] Running
	I0625 15:58:39.243189   36162 system_pods.go:89] "kube-vip-ha-674765-m03" [08c72802-7f04-47c2-956a-8adc1a430e56] Running
	I0625 15:58:39.243195   36162 system_pods.go:89] "storage-provisioner" [c227c5cf-2bd6-4ebf-9fdb-09d4229cf421] Running
	I0625 15:58:39.243206   36162 system_pods.go:126] duration metric: took 210.656126ms to wait for k8s-apps to be running ...
	I0625 15:58:39.243220   36162 system_svc.go:44] waiting for kubelet service to be running ....
	I0625 15:58:39.243270   36162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 15:58:39.260565   36162 system_svc.go:56] duration metric: took 17.338537ms WaitForService to wait for kubelet
	I0625 15:58:39.260592   36162 kubeadm.go:576] duration metric: took 18.46091276s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0625 15:58:39.260612   36162 node_conditions.go:102] verifying NodePressure condition ...
	I0625 15:58:39.426892   36162 request.go:629] Waited for 166.223413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes
	I0625 15:58:39.426957   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes
	I0625 15:58:39.426963   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:39.426975   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:39.426981   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:39.431813   36162 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0625 15:58:39.432826   36162 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0625 15:58:39.432848   36162 node_conditions.go:123] node cpu capacity is 2
	I0625 15:58:39.432860   36162 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0625 15:58:39.432864   36162 node_conditions.go:123] node cpu capacity is 2
	I0625 15:58:39.432868   36162 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0625 15:58:39.432871   36162 node_conditions.go:123] node cpu capacity is 2
	I0625 15:58:39.432874   36162 node_conditions.go:105] duration metric: took 172.258695ms to run NodePressure ...
	I0625 15:58:39.432888   36162 start.go:240] waiting for startup goroutines ...
	I0625 15:58:39.432913   36162 start.go:254] writing updated cluster config ...
	I0625 15:58:39.433196   36162 ssh_runner.go:195] Run: rm -f paused
	I0625 15:58:39.484755   36162 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0625 15:58:39.486626   36162 out.go:177] * Done! kubectl is now configured to use "ha-674765" cluster and "default" namespace by default
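For reference, the readiness waits recorded above (pod_ready.go) amount to repeatedly GETting each pod and its node until the pod reports the PodReady condition as True, within a 6m0s budget per pod. Below is a minimal client-go sketch of that kind of polling loop; it is an illustration only, not minikube's actual implementation, and the kubeconfig path is an assumption while the pod name is reused from the log for concreteness.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path is assumed for this example.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 500ms for up to 6 minutes, mirroring the 6m0s budget seen in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-674765-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient errors and keep polling
			}
			return isPodReady(pod), nil
		})
	if err != nil {
		fmt.Println("pod never became Ready:", err)
		return
	}
	fmt.Println("pod is Ready")
}

The alternating pod/node GETs and the "client-side throttling" waits in the log come from the client's default rate limiting; a loop like the one above would show the same behavior under the default client-go settings.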
	
	
	==> CRI-O <==
	Jun 25 16:02:08 ha-674765 crio[684]: time="2024-06-25 16:02:08.867010872Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719331328866984085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=41adf288-dc95-46a9-8483-418721ee1d9f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:02:08 ha-674765 crio[684]: time="2024-06-25 16:02:08.867383915Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a837112-a6a3-439e-a924-c46fd098c5e2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:02:08 ha-674765 crio[684]: time="2024-06-25 16:02:08.867456660Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a837112-a6a3-439e-a924-c46fd098c5e2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:02:08 ha-674765 crio[684]: time="2024-06-25 16:02:08.867676049Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd7837c56cda33edd21808fe9d0441fdd08abd1bdebe8f801a3611412c9f4915,PodSandboxId:d18f421cdb437abaad95182a5581045ed7639dbd944aa4d3b7cbcf8551a67f1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1719331123602460126,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qjw4r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49031b4f-d04c-44bf-9725-094e7df6945c,},Annotations:map[string]string{io.kubernetes.container.hash: 37a65fc,io.kubernetes.container.restartCount: 0,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec00b1016861e1acd703a40af2b274a6d7bd1b0b9c8e37a463cd46994e27ce0b,PodSandboxId:2249d5de30294a4411052d912ac663f8b0d2f1f1e010eace066e8eba72cff9f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719330982140184531,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-84zkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f6426f8-a0c4-470c-b2b1-b62fa304c078,},Annotations:map[string]string{io.kubernetes.container.hash: f8bf5066,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dff3834f63a382816899f273fe9970c90e171b84aa75c6626dd2435c35f00d8,PodSandboxId:36a6cd372769cb4e0b61267af34ab214f7e98a894596572c1f18f91b85865fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719330982105059965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28db5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1426e4a3-2f25-47e9-9b28-b23a81a3a19a,},Annotations:map[string]string{io.kubernetes.container.hash: 370c32f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1a641a439e9e1a4812e2d701e924065cf043e82fbeeb31138efc1da913f59e,PodSandboxId:8c17c2f81c12f58083ae9c6e26c825dc4701f9b68cbf01e1583716d703bc9269,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1719330981985152081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c227c5cf-2bd6-4ebf-9fdb-09d4229cf421,},Annotations:map[string]string{io.kubernetes.container.hash: 69a09345,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3562eeca26a1a1131c2f80e823f3f8779c3e235bf200331ce891f51b37df0c,PodSandboxId:1623f777feead6fabf15a4e29139791f4c38ed435a0c368cdb1dfebf1a45ec64,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:171933098
0130797768,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ntq77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37736a9f-5b4c-421c-9027-81e961ab8550,},Annotations:map[string]string{io.kubernetes.container.hash: d770c3b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea2f95fa7a7ef2b8d281a5e1b9c59c317c03084458fe57df036d763e43180c,PodSandboxId:41bb01e505abeae0d97e1019e5c33c9523130dd829e516e2ded6ffc9072c534b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719330979753070308,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh9n5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a24539-3168-42cc-93b3-d0b1e283d0bd,},Annotations:map[string]string{io.kubernetes.container.hash: 676d07db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3ed8ce894547a7bc3deba857b5d7d733af8ba225cb579c469f090460bff27d3,PodSandboxId:ebc12e3f7a7ce5a2b5a6c7beddfe956ea5de58d27aa020dc6979043c872fc752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1719330963710738330,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41be34697cf0082e06e8923557664cc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ed432b8fb613e5494924e83c46de1bfe881b19bdb28b693da5298cf99f2e65,PodSandboxId:3498fabc6b53a97d349e73fb2ef8cb3df14eef29ff198836b4363612da9f0c9e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719330960414652980,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b7475c05d199411d1b89c7e4e58c52,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9938e238e129cd0d797a5de776e0d7b756bc8f39188223f4151974b19fb7506c,PodSandboxId:e3236a96cfba0a3dd95041d0792f8fa934df06572898ae6514913c5050b5fe9c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719330960405239254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-m
anager,io.kubernetes.pod.name: kube-controller-manager-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b6a570fe6dbd4fb43c7d5ee0090fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a40f818bed683af529089283a92813b3d87d93d9cb9290b6081645f3bced82fa,PodSandboxId:fb68107e9ae65837eff4df8cb043150d9fab87c80158db9f2658dcb99e1ae72c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719330960357440842,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551c5ef04195f1917356d613c1c2825c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b5bb08a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e903f61a215f1423f9e270e19b11a7357fee578dc62e4dd059dbe9c47a999c32,PodSandboxId:4695ac9edbc507bbbbe372a26cedd099c7de9206dd507a961697b309c7144f1e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719330960349402853,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-674765,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c950a69aa02d41e644621ad51e81615e,},Annotations:map[string]string{io.kubernetes.container.hash: ca1dc9e0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a837112-a6a3-439e-a924-c46fd098c5e2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:02:08 ha-674765 crio[684]: time="2024-06-25 16:02:08.905027705Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2f0f80b2-099f-4ef6-bf70-d4f4a303813a name=/runtime.v1.RuntimeService/Version
	Jun 25 16:02:08 ha-674765 crio[684]: time="2024-06-25 16:02:08.905096245Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2f0f80b2-099f-4ef6-bf70-d4f4a303813a name=/runtime.v1.RuntimeService/Version
	Jun 25 16:02:08 ha-674765 crio[684]: time="2024-06-25 16:02:08.906464850Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b68648d1-0426-4f55-81c1-e830f1fd3473 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:02:08 ha-674765 crio[684]: time="2024-06-25 16:02:08.906950378Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719331328906929223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b68648d1-0426-4f55-81c1-e830f1fd3473 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:02:08 ha-674765 crio[684]: time="2024-06-25 16:02:08.907609409Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7d37aaf-7e58-49a3-ab37-93a1d1770de1 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:02:08 ha-674765 crio[684]: time="2024-06-25 16:02:08.907789840Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7d37aaf-7e58-49a3-ab37-93a1d1770de1 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:02:08 ha-674765 crio[684]: time="2024-06-25 16:02:08.908102250Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd7837c56cda33edd21808fe9d0441fdd08abd1bdebe8f801a3611412c9f4915,PodSandboxId:d18f421cdb437abaad95182a5581045ed7639dbd944aa4d3b7cbcf8551a67f1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1719331123602460126,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qjw4r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49031b4f-d04c-44bf-9725-094e7df6945c,},Annotations:map[string]string{io.kubernetes.container.hash: 37a65fc,io.kubernetes.container.restartCount: 0,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec00b1016861e1acd703a40af2b274a6d7bd1b0b9c8e37a463cd46994e27ce0b,PodSandboxId:2249d5de30294a4411052d912ac663f8b0d2f1f1e010eace066e8eba72cff9f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719330982140184531,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-84zkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f6426f8-a0c4-470c-b2b1-b62fa304c078,},Annotations:map[string]string{io.kubernetes.container.hash: f8bf5066,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dff3834f63a382816899f273fe9970c90e171b84aa75c6626dd2435c35f00d8,PodSandboxId:36a6cd372769cb4e0b61267af34ab214f7e98a894596572c1f18f91b85865fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719330982105059965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28db5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1426e4a3-2f25-47e9-9b28-b23a81a3a19a,},Annotations:map[string]string{io.kubernetes.container.hash: 370c32f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1a641a439e9e1a4812e2d701e924065cf043e82fbeeb31138efc1da913f59e,PodSandboxId:8c17c2f81c12f58083ae9c6e26c825dc4701f9b68cbf01e1583716d703bc9269,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1719330981985152081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c227c5cf-2bd6-4ebf-9fdb-09d4229cf421,},Annotations:map[string]string{io.kubernetes.container.hash: 69a09345,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3562eeca26a1a1131c2f80e823f3f8779c3e235bf200331ce891f51b37df0c,PodSandboxId:1623f777feead6fabf15a4e29139791f4c38ed435a0c368cdb1dfebf1a45ec64,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:171933098
0130797768,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ntq77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37736a9f-5b4c-421c-9027-81e961ab8550,},Annotations:map[string]string{io.kubernetes.container.hash: d770c3b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea2f95fa7a7ef2b8d281a5e1b9c59c317c03084458fe57df036d763e43180c,PodSandboxId:41bb01e505abeae0d97e1019e5c33c9523130dd829e516e2ded6ffc9072c534b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719330979753070308,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh9n5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a24539-3168-42cc-93b3-d0b1e283d0bd,},Annotations:map[string]string{io.kubernetes.container.hash: 676d07db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3ed8ce894547a7bc3deba857b5d7d733af8ba225cb579c469f090460bff27d3,PodSandboxId:ebc12e3f7a7ce5a2b5a6c7beddfe956ea5de58d27aa020dc6979043c872fc752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1719330963710738330,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41be34697cf0082e06e8923557664cc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ed432b8fb613e5494924e83c46de1bfe881b19bdb28b693da5298cf99f2e65,PodSandboxId:3498fabc6b53a97d349e73fb2ef8cb3df14eef29ff198836b4363612da9f0c9e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719330960414652980,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b7475c05d199411d1b89c7e4e58c52,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9938e238e129cd0d797a5de776e0d7b756bc8f39188223f4151974b19fb7506c,PodSandboxId:e3236a96cfba0a3dd95041d0792f8fa934df06572898ae6514913c5050b5fe9c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719330960405239254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-m
anager,io.kubernetes.pod.name: kube-controller-manager-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b6a570fe6dbd4fb43c7d5ee0090fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a40f818bed683af529089283a92813b3d87d93d9cb9290b6081645f3bced82fa,PodSandboxId:fb68107e9ae65837eff4df8cb043150d9fab87c80158db9f2658dcb99e1ae72c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719330960357440842,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551c5ef04195f1917356d613c1c2825c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b5bb08a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e903f61a215f1423f9e270e19b11a7357fee578dc62e4dd059dbe9c47a999c32,PodSandboxId:4695ac9edbc507bbbbe372a26cedd099c7de9206dd507a961697b309c7144f1e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719330960349402853,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-674765,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c950a69aa02d41e644621ad51e81615e,},Annotations:map[string]string{io.kubernetes.container.hash: ca1dc9e0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e7d37aaf-7e58-49a3-ab37-93a1d1770de1 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:02:08 ha-674765 crio[684]: time="2024-06-25 16:02:08.953534745Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a0de5be8-01a7-46ed-96e1-141539a9e27f name=/runtime.v1.RuntimeService/Version
	Jun 25 16:02:08 ha-674765 crio[684]: time="2024-06-25 16:02:08.953606798Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a0de5be8-01a7-46ed-96e1-141539a9e27f name=/runtime.v1.RuntimeService/Version
	Jun 25 16:02:08 ha-674765 crio[684]: time="2024-06-25 16:02:08.955237980Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4b40ca49-9e3c-49c8-b6bd-e987d2f79cbf name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:02:08 ha-674765 crio[684]: time="2024-06-25 16:02:08.955834478Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719331328955647792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4b40ca49-9e3c-49c8-b6bd-e987d2f79cbf name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:02:08 ha-674765 crio[684]: time="2024-06-25 16:02:08.956630756Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=337f8c85-9a5e-411e-89fc-a1d38065cce0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:02:08 ha-674765 crio[684]: time="2024-06-25 16:02:08.956685874Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=337f8c85-9a5e-411e-89fc-a1d38065cce0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:02:08 ha-674765 crio[684]: time="2024-06-25 16:02:08.956959212Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd7837c56cda33edd21808fe9d0441fdd08abd1bdebe8f801a3611412c9f4915,PodSandboxId:d18f421cdb437abaad95182a5581045ed7639dbd944aa4d3b7cbcf8551a67f1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1719331123602460126,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qjw4r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49031b4f-d04c-44bf-9725-094e7df6945c,},Annotations:map[string]string{io.kubernetes.container.hash: 37a65fc,io.kubernetes.container.restartCount: 0,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec00b1016861e1acd703a40af2b274a6d7bd1b0b9c8e37a463cd46994e27ce0b,PodSandboxId:2249d5de30294a4411052d912ac663f8b0d2f1f1e010eace066e8eba72cff9f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719330982140184531,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-84zkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f6426f8-a0c4-470c-b2b1-b62fa304c078,},Annotations:map[string]string{io.kubernetes.container.hash: f8bf5066,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dff3834f63a382816899f273fe9970c90e171b84aa75c6626dd2435c35f00d8,PodSandboxId:36a6cd372769cb4e0b61267af34ab214f7e98a894596572c1f18f91b85865fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719330982105059965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28db5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1426e4a3-2f25-47e9-9b28-b23a81a3a19a,},Annotations:map[string]string{io.kubernetes.container.hash: 370c32f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1a641a439e9e1a4812e2d701e924065cf043e82fbeeb31138efc1da913f59e,PodSandboxId:8c17c2f81c12f58083ae9c6e26c825dc4701f9b68cbf01e1583716d703bc9269,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1719330981985152081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c227c5cf-2bd6-4ebf-9fdb-09d4229cf421,},Annotations:map[string]string{io.kubernetes.container.hash: 69a09345,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3562eeca26a1a1131c2f80e823f3f8779c3e235bf200331ce891f51b37df0c,PodSandboxId:1623f777feead6fabf15a4e29139791f4c38ed435a0c368cdb1dfebf1a45ec64,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:171933098
0130797768,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ntq77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37736a9f-5b4c-421c-9027-81e961ab8550,},Annotations:map[string]string{io.kubernetes.container.hash: d770c3b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea2f95fa7a7ef2b8d281a5e1b9c59c317c03084458fe57df036d763e43180c,PodSandboxId:41bb01e505abeae0d97e1019e5c33c9523130dd829e516e2ded6ffc9072c534b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719330979753070308,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh9n5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a24539-3168-42cc-93b3-d0b1e283d0bd,},Annotations:map[string]string{io.kubernetes.container.hash: 676d07db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3ed8ce894547a7bc3deba857b5d7d733af8ba225cb579c469f090460bff27d3,PodSandboxId:ebc12e3f7a7ce5a2b5a6c7beddfe956ea5de58d27aa020dc6979043c872fc752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1719330963710738330,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41be34697cf0082e06e8923557664cc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ed432b8fb613e5494924e83c46de1bfe881b19bdb28b693da5298cf99f2e65,PodSandboxId:3498fabc6b53a97d349e73fb2ef8cb3df14eef29ff198836b4363612da9f0c9e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719330960414652980,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b7475c05d199411d1b89c7e4e58c52,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9938e238e129cd0d797a5de776e0d7b756bc8f39188223f4151974b19fb7506c,PodSandboxId:e3236a96cfba0a3dd95041d0792f8fa934df06572898ae6514913c5050b5fe9c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719330960405239254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-m
anager,io.kubernetes.pod.name: kube-controller-manager-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b6a570fe6dbd4fb43c7d5ee0090fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a40f818bed683af529089283a92813b3d87d93d9cb9290b6081645f3bced82fa,PodSandboxId:fb68107e9ae65837eff4df8cb043150d9fab87c80158db9f2658dcb99e1ae72c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719330960357440842,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551c5ef04195f1917356d613c1c2825c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b5bb08a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e903f61a215f1423f9e270e19b11a7357fee578dc62e4dd059dbe9c47a999c32,PodSandboxId:4695ac9edbc507bbbbe372a26cedd099c7de9206dd507a961697b309c7144f1e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719330960349402853,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-674765,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c950a69aa02d41e644621ad51e81615e,},Annotations:map[string]string{io.kubernetes.container.hash: ca1dc9e0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=337f8c85-9a5e-411e-89fc-a1d38065cce0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:02:08 ha-674765 crio[684]: time="2024-06-25 16:02:08.997206468Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6e54c107-1ad3-4af1-a795-2c0703b1668f name=/runtime.v1.RuntimeService/Version
	Jun 25 16:02:08 ha-674765 crio[684]: time="2024-06-25 16:02:08.997295105Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6e54c107-1ad3-4af1-a795-2c0703b1668f name=/runtime.v1.RuntimeService/Version
	Jun 25 16:02:08 ha-674765 crio[684]: time="2024-06-25 16:02:08.998482062Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=15f3f968-244c-454d-965e-984440e1ca6c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:02:08 ha-674765 crio[684]: time="2024-06-25 16:02:08.998972716Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719331328998944390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15f3f968-244c-454d-965e-984440e1ca6c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:02:08 ha-674765 crio[684]: time="2024-06-25 16:02:08.999365262Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c17a41f6-37b7-4046-a63f-61b4e33d7657 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:02:08 ha-674765 crio[684]: time="2024-06-25 16:02:08.999437110Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c17a41f6-37b7-4046-a63f-61b4e33d7657 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:02:08 ha-674765 crio[684]: time="2024-06-25 16:02:08.999686089Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd7837c56cda33edd21808fe9d0441fdd08abd1bdebe8f801a3611412c9f4915,PodSandboxId:d18f421cdb437abaad95182a5581045ed7639dbd944aa4d3b7cbcf8551a67f1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1719331123602460126,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qjw4r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49031b4f-d04c-44bf-9725-094e7df6945c,},Annotations:map[string]string{io.kubernetes.container.hash: 37a65fc,io.kubernetes.container.restartCount: 0,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec00b1016861e1acd703a40af2b274a6d7bd1b0b9c8e37a463cd46994e27ce0b,PodSandboxId:2249d5de30294a4411052d912ac663f8b0d2f1f1e010eace066e8eba72cff9f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719330982140184531,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-84zkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f6426f8-a0c4-470c-b2b1-b62fa304c078,},Annotations:map[string]string{io.kubernetes.container.hash: f8bf5066,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dff3834f63a382816899f273fe9970c90e171b84aa75c6626dd2435c35f00d8,PodSandboxId:36a6cd372769cb4e0b61267af34ab214f7e98a894596572c1f18f91b85865fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719330982105059965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28db5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1426e4a3-2f25-47e9-9b28-b23a81a3a19a,},Annotations:map[string]string{io.kubernetes.container.hash: 370c32f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1a641a439e9e1a4812e2d701e924065cf043e82fbeeb31138efc1da913f59e,PodSandboxId:8c17c2f81c12f58083ae9c6e26c825dc4701f9b68cbf01e1583716d703bc9269,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1719330981985152081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c227c5cf-2bd6-4ebf-9fdb-09d4229cf421,},Annotations:map[string]string{io.kubernetes.container.hash: 69a09345,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3562eeca26a1a1131c2f80e823f3f8779c3e235bf200331ce891f51b37df0c,PodSandboxId:1623f777feead6fabf15a4e29139791f4c38ed435a0c368cdb1dfebf1a45ec64,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:171933098
0130797768,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ntq77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37736a9f-5b4c-421c-9027-81e961ab8550,},Annotations:map[string]string{io.kubernetes.container.hash: d770c3b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea2f95fa7a7ef2b8d281a5e1b9c59c317c03084458fe57df036d763e43180c,PodSandboxId:41bb01e505abeae0d97e1019e5c33c9523130dd829e516e2ded6ffc9072c534b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719330979753070308,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh9n5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a24539-3168-42cc-93b3-d0b1e283d0bd,},Annotations:map[string]string{io.kubernetes.container.hash: 676d07db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3ed8ce894547a7bc3deba857b5d7d733af8ba225cb579c469f090460bff27d3,PodSandboxId:ebc12e3f7a7ce5a2b5a6c7beddfe956ea5de58d27aa020dc6979043c872fc752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1719330963710738330,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41be34697cf0082e06e8923557664cc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ed432b8fb613e5494924e83c46de1bfe881b19bdb28b693da5298cf99f2e65,PodSandboxId:3498fabc6b53a97d349e73fb2ef8cb3df14eef29ff198836b4363612da9f0c9e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719330960414652980,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b7475c05d199411d1b89c7e4e58c52,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9938e238e129cd0d797a5de776e0d7b756bc8f39188223f4151974b19fb7506c,PodSandboxId:e3236a96cfba0a3dd95041d0792f8fa934df06572898ae6514913c5050b5fe9c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719330960405239254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-m
anager,io.kubernetes.pod.name: kube-controller-manager-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b6a570fe6dbd4fb43c7d5ee0090fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a40f818bed683af529089283a92813b3d87d93d9cb9290b6081645f3bced82fa,PodSandboxId:fb68107e9ae65837eff4df8cb043150d9fab87c80158db9f2658dcb99e1ae72c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719330960357440842,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551c5ef04195f1917356d613c1c2825c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b5bb08a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e903f61a215f1423f9e270e19b11a7357fee578dc62e4dd059dbe9c47a999c32,PodSandboxId:4695ac9edbc507bbbbe372a26cedd099c7de9206dd507a961697b309c7144f1e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719330960349402853,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-674765,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c950a69aa02d41e644621ad51e81615e,},Annotations:map[string]string{io.kubernetes.container.hash: ca1dc9e0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c17a41f6-37b7-4046-a63f-61b4e33d7657 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dd7837c56cda3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   d18f421cdb437       busybox-fc5497c4f-qjw4r
	ec00b1016861e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   2249d5de30294       coredns-7db6d8ff4d-84zkt
	5dff3834f63a3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   36a6cd372769c       coredns-7db6d8ff4d-28db5
	6e1a641a439e9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   8c17c2f81c12f       storage-provisioner
	ff3562eeca26a       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      5 minutes ago       Running             kindnet-cni               0                   1623f777feead       kindnet-ntq77
	7cea2f95fa7a7       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      5 minutes ago       Running             kube-proxy                0                   41bb01e505abe       kube-proxy-rh9n5
	c3ed8ce894547       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   ebc12e3f7a7ce       kube-vip-ha-674765
	a7ed432b8fb61       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      6 minutes ago       Running             kube-scheduler            0                   3498fabc6b53a       kube-scheduler-ha-674765
	9938e238e129c       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      6 minutes ago       Running             kube-controller-manager   0                   e3236a96cfba0       kube-controller-manager-ha-674765
	a40f818bed683       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      6 minutes ago       Running             kube-apiserver            0                   fb68107e9ae65       kube-apiserver-ha-674765
	e903f61a215f1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   4695ac9edbc50       etcd-ha-674765
	
	
	==> coredns [5dff3834f63a382816899f273fe9970c90e171b84aa75c6626dd2435c35f00d8] <==
	[INFO] 10.244.1.2:37149 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000135115s
	[INFO] 10.244.1.2:55180 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000186786s
	[INFO] 10.244.0.4:51274 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116315s
	[INFO] 10.244.0.4:58927 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00175173s
	[INFO] 10.244.0.4:58086 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147783s
	[INFO] 10.244.0.4:40292 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069393s
	[INFO] 10.244.0.4:47923 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008723s
	[INFO] 10.244.2.2:43607 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000173082s
	[INFO] 10.244.2.2:58140 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152475s
	[INFO] 10.244.2.2:58321 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00137128s
	[INFO] 10.244.2.2:51827 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149446s
	[INFO] 10.244.1.2:53516 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091184s
	[INFO] 10.244.1.2:50837 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111518s
	[INFO] 10.244.0.4:36638 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096918s
	[INFO] 10.244.0.4:34420 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062938s
	[INFO] 10.244.2.2:47727 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109009s
	[INFO] 10.244.2.2:53547 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114146s
	[INFO] 10.244.2.2:52427 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103325s
	[INFO] 10.244.0.4:35396 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015274s
	[INFO] 10.244.0.4:37070 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000162346s
	[INFO] 10.244.0.4:34499 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000181932s
	[INFO] 10.244.2.2:39406 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141568s
	[INFO] 10.244.2.2:45012 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000125003s
	[INFO] 10.244.2.2:37480 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111741s
	[INFO] 10.244.2.2:38163 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000160497s
	
	
	==> coredns [ec00b1016861e1acd703a40af2b274a6d7bd1b0b9c8e37a463cd46994e27ce0b] <==
	[INFO] 10.244.1.2:59350 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.001047557s
	[INFO] 10.244.0.4:38331 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001513057s
	[INFO] 10.244.2.2:38263 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00022826s
	[INFO] 10.244.1.2:37269 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000228534s
	[INFO] 10.244.1.2:37116 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000238078s
	[INFO] 10.244.1.2:57875 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000221098s
	[INFO] 10.244.1.2:50144 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003188543s
	[INFO] 10.244.1.2:52779 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000142142s
	[INFO] 10.244.0.4:54632 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118741s
	[INFO] 10.244.0.4:42979 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001269082s
	[INFO] 10.244.0.4:36713 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000084451s
	[INFO] 10.244.2.2:41583 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001597985s
	[INFO] 10.244.2.2:38518 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007901s
	[INFO] 10.244.2.2:36859 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000163343s
	[INFO] 10.244.2.2:48049 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012051s
	[INFO] 10.244.1.2:41596 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099989s
	[INFO] 10.244.1.2:53657 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152026s
	[INFO] 10.244.0.4:37328 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010546s
	[INFO] 10.244.0.4:37107 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000078111s
	[INFO] 10.244.2.2:58260 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109644s
	[INFO] 10.244.1.2:51838 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161138s
	[INFO] 10.244.1.2:34544 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000245952s
	[INFO] 10.244.1.2:41848 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000133045s
	[INFO] 10.244.1.2:55838 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000180767s
	[INFO] 10.244.0.4:56384 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000068132s
	
	
	==> describe nodes <==
	Name:               ha-674765
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-674765
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b
	                    minikube.k8s.io/name=ha-674765
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_25T15_56_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 25 Jun 2024 15:56:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-674765
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 25 Jun 2024 16:02:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 25 Jun 2024 15:59:10 +0000   Tue, 25 Jun 2024 15:56:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 25 Jun 2024 15:59:10 +0000   Tue, 25 Jun 2024 15:56:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 25 Jun 2024 15:59:10 +0000   Tue, 25 Jun 2024 15:56:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 25 Jun 2024 15:59:10 +0000   Tue, 25 Jun 2024 15:56:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.128
	  Hostname:    ha-674765
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b9f74a4b042742c8a0ef29e697c6459c
	  System UUID:                b9f74a4b-0427-42c8-a0ef-29e697c6459c
	  Boot ID:                    52ea2189-696e-4985-bf6b-90448e3e85aa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qjw4r              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 coredns-7db6d8ff4d-28db5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m50s
	  kube-system                 coredns-7db6d8ff4d-84zkt             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m50s
	  kube-system                 etcd-ha-674765                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m3s
	  kube-system                 kindnet-ntq77                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m50s
	  kube-system                 kube-apiserver-ha-674765             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-controller-manager-ha-674765    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-proxy-rh9n5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 kube-scheduler-ha-674765             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-vip-ha-674765                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m49s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m10s (x7 over 6m10s)  kubelet          Node ha-674765 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m10s (x8 over 6m10s)  kubelet          Node ha-674765 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m10s (x8 over 6m10s)  kubelet          Node ha-674765 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m3s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m3s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m3s                   kubelet          Node ha-674765 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m3s                   kubelet          Node ha-674765 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m3s                   kubelet          Node ha-674765 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m51s                  node-controller  Node ha-674765 event: Registered Node ha-674765 in Controller
	  Normal  NodeReady                5m48s                  kubelet          Node ha-674765 status is now: NodeReady
	  Normal  RegisteredNode           4m42s                  node-controller  Node ha-674765 event: Registered Node ha-674765 in Controller
	  Normal  RegisteredNode           3m34s                  node-controller  Node ha-674765 event: Registered Node ha-674765 in Controller
	
	
	Name:               ha-674765-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-674765-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b
	                    minikube.k8s.io/name=ha-674765
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_25T15_57_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 25 Jun 2024 15:57:09 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-674765-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 25 Jun 2024 15:59:44 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 25 Jun 2024 15:59:12 +0000   Tue, 25 Jun 2024 16:00:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 25 Jun 2024 15:59:12 +0000   Tue, 25 Jun 2024 16:00:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 25 Jun 2024 15:59:12 +0000   Tue, 25 Jun 2024 16:00:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 25 Jun 2024 15:59:12 +0000   Tue, 25 Jun 2024 16:00:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.53
	  Hostname:    ha-674765-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 45ee8176fa3149fdb7e4bac2256c26b7
	  System UUID:                45ee8176-fa31-49fd-b7e4-bac2256c26b7
	  Boot ID:                    3d0db961-cfa5-4af0-9483-cceea6d2d005
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jx6j4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 etcd-ha-674765-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m58s
	  kube-system                 kindnet-kkgdq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m
	  kube-system                 kube-apiserver-ha-674765-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-controller-manager-ha-674765-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-proxy-lsmft                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-scheduler-ha-674765-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-vip-ha-674765-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 4m55s            kube-proxy       
	  Normal  NodeHasSufficientMemory  5m (x8 over 5m)  kubelet          Node ha-674765-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m (x8 over 5m)  kubelet          Node ha-674765-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m (x7 over 5m)  kubelet          Node ha-674765-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m56s            node-controller  Node ha-674765-m02 event: Registered Node ha-674765-m02 in Controller
	  Normal  RegisteredNode           4m42s            node-controller  Node ha-674765-m02 event: Registered Node ha-674765-m02 in Controller
	  Normal  RegisteredNode           3m34s            node-controller  Node ha-674765-m02 event: Registered Node ha-674765-m02 in Controller
	  Normal  NodeNotReady             104s             node-controller  Node ha-674765-m02 status is now: NodeNotReady
	
	
	Name:               ha-674765-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-674765-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b
	                    minikube.k8s.io/name=ha-674765
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_25T15_58_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 25 Jun 2024 15:58:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-674765-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 25 Jun 2024 16:02:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 25 Jun 2024 15:58:48 +0000   Tue, 25 Jun 2024 15:58:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 25 Jun 2024 15:58:48 +0000   Tue, 25 Jun 2024 15:58:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 25 Jun 2024 15:58:48 +0000   Tue, 25 Jun 2024 15:58:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 25 Jun 2024 15:58:48 +0000   Tue, 25 Jun 2024 15:58:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.77
	  Hostname:    ha-674765-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 82d78f3bf896447aa83d147c6be1d104
	  System UUID:                82d78f3b-f896-447a-a83d-147c6be1d104
	  Boot ID:                    9e6335e7-1ac0-4745-936d-85efc228a44f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vn65x                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 etcd-ha-674765-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m50s
	  kube-system                 kindnet-px4dn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m52s
	  kube-system                 kube-apiserver-ha-674765-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 kube-controller-manager-ha-674765-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 kube-proxy-swfsx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 kube-scheduler-ha-674765-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-vip-ha-674765-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m52s (x8 over 3m52s)  kubelet          Node ha-674765-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m52s (x8 over 3m52s)  kubelet          Node ha-674765-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m52s (x7 over 3m52s)  kubelet          Node ha-674765-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-674765-m03 event: Registered Node ha-674765-m03 in Controller
	  Normal  RegisteredNode           3m47s                  node-controller  Node ha-674765-m03 event: Registered Node ha-674765-m03 in Controller
	  Normal  RegisteredNode           3m34s                  node-controller  Node ha-674765-m03 event: Registered Node ha-674765-m03 in Controller
	
	
	Name:               ha-674765-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-674765-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b
	                    minikube.k8s.io/name=ha-674765
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_25T15_59_18_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 25 Jun 2024 15:59:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-674765-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 25 Jun 2024 16:02:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 25 Jun 2024 15:59:48 +0000   Tue, 25 Jun 2024 15:59:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 25 Jun 2024 15:59:48 +0000   Tue, 25 Jun 2024 15:59:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 25 Jun 2024 15:59:48 +0000   Tue, 25 Jun 2024 15:59:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 25 Jun 2024 15:59:48 +0000   Tue, 25 Jun 2024 15:59:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.7
	  Hostname:    ha-674765-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 153487087a1a4805965ecc96230ab164
	  System UUID:                15348708-7a1a-4805-965e-cc96230ab164
	  Boot ID:                    6eb50b4e-74eb-4263-80f2-15c137071776
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6z24k       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m52s
	  kube-system                 kube-proxy-szzwh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m46s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m52s (x2 over 2m52s)  kubelet          Node ha-674765-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m52s (x2 over 2m52s)  kubelet          Node ha-674765-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m52s (x2 over 2m52s)  kubelet          Node ha-674765-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m51s                  node-controller  Node ha-674765-m04 event: Registered Node ha-674765-m04 in Controller
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-674765-m04 event: Registered Node ha-674765-m04 in Controller
	  Normal  RegisteredNode           2m47s                  node-controller  Node ha-674765-m04 event: Registered Node ha-674765-m04 in Controller
	  Normal  NodeReady                2m41s                  kubelet          Node ha-674765-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jun25 15:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051304] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040153] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.502989] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.368528] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.612454] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.515677] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.054245] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062657] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.163326] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.122319] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.250574] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.069829] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +3.840914] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.060181] kauditd_printk_skb: 158 callbacks suppressed
	[Jun25 15:56] systemd-fstab-generator[1368]: Ignoring "noauto" option for root device
	[  +0.085967] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.321595] kauditd_printk_skb: 21 callbacks suppressed
	[Jun25 15:57] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [e903f61a215f1423f9e270e19b11a7357fee578dc62e4dd059dbe9c47a999c32] <==
	{"level":"warn","ts":"2024-06-25T16:02:09.273055Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:02:09.282478Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:02:09.288356Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:02:09.300338Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:02:09.310813Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:02:09.322705Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:02:09.328494Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:02:09.331575Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:02:09.339457Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:02:09.339848Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:02:09.347146Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:02:09.353376Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:02:09.356258Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:02:09.35932Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:02:09.36628Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:02:09.37207Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:02:09.38079Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:02:09.38461Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:02:09.388243Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:02:09.396921Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:02:09.403231Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:02:09.405382Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.53:2380/version","remote-member-id":"ce369a7c509ac3e5","error":"Get \"https://192.168.39.53:2380/version\": dial tcp 192.168.39.53:2380: i/o timeout"}
	{"level":"warn","ts":"2024-06-25T16:02:09.405438Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"ce369a7c509ac3e5","error":"Get \"https://192.168.39.53:2380/version\": dial tcp 192.168.39.53:2380: i/o timeout"}
	{"level":"warn","ts":"2024-06-25T16:02:09.409258Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:02:09.439671Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 16:02:09 up 6 min,  0 users,  load average: 0.06, 0.12, 0.07
	Linux ha-674765 5.10.207 #1 SMP Mon Jun 24 21:03:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ff3562eeca26a1a1131c2f80e823f3f8779c3e235bf200331ce891f51b37df0c] <==
	I0625 16:01:31.361445       1 main.go:250] Node ha-674765-m04 has CIDR [10.244.3.0/24] 
	I0625 16:01:41.374838       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0625 16:01:41.375098       1 main.go:227] handling current node
	I0625 16:01:41.375180       1 main.go:223] Handling node with IPs: map[192.168.39.53:{}]
	I0625 16:01:41.375256       1 main.go:250] Node ha-674765-m02 has CIDR [10.244.1.0/24] 
	I0625 16:01:41.375497       1 main.go:223] Handling node with IPs: map[192.168.39.77:{}]
	I0625 16:01:41.375591       1 main.go:250] Node ha-674765-m03 has CIDR [10.244.2.0/24] 
	I0625 16:01:41.375751       1 main.go:223] Handling node with IPs: map[192.168.39.7:{}]
	I0625 16:01:41.375824       1 main.go:250] Node ha-674765-m04 has CIDR [10.244.3.0/24] 
	I0625 16:01:51.382444       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0625 16:01:51.382578       1 main.go:227] handling current node
	I0625 16:01:51.382622       1 main.go:223] Handling node with IPs: map[192.168.39.53:{}]
	I0625 16:01:51.382650       1 main.go:250] Node ha-674765-m02 has CIDR [10.244.1.0/24] 
	I0625 16:01:51.382828       1 main.go:223] Handling node with IPs: map[192.168.39.77:{}]
	I0625 16:01:51.382951       1 main.go:250] Node ha-674765-m03 has CIDR [10.244.2.0/24] 
	I0625 16:01:51.383109       1 main.go:223] Handling node with IPs: map[192.168.39.7:{}]
	I0625 16:01:51.383193       1 main.go:250] Node ha-674765-m04 has CIDR [10.244.3.0/24] 
	I0625 16:02:01.399109       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0625 16:02:01.399183       1 main.go:227] handling current node
	I0625 16:02:01.399206       1 main.go:223] Handling node with IPs: map[192.168.39.53:{}]
	I0625 16:02:01.399222       1 main.go:250] Node ha-674765-m02 has CIDR [10.244.1.0/24] 
	I0625 16:02:01.399349       1 main.go:223] Handling node with IPs: map[192.168.39.77:{}]
	I0625 16:02:01.399373       1 main.go:250] Node ha-674765-m03 has CIDR [10.244.2.0/24] 
	I0625 16:02:01.399430       1 main.go:223] Handling node with IPs: map[192.168.39.7:{}]
	I0625 16:02:01.399447       1 main.go:250] Node ha-674765-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [a40f818bed683af529089283a92813b3d87d93d9cb9290b6081645f3bced82fa] <==
	I0625 15:56:04.930005       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0625 15:56:05.151090       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0625 15:56:06.646123       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0625 15:56:06.673651       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0625 15:56:06.684625       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0625 15:56:19.306091       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0625 15:56:19.358752       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0625 15:58:18.307766       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0625 15:58:18.307842       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0625 15:58:18.308003       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 5.3µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0625 15:58:18.309224       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0625 15:58:18.309349       1 timeout.go:142] post-timeout activity - time-elapsed: 1.703289ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0625 15:58:45.099490       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50334: use of closed network connection
	E0625 15:58:45.278184       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50354: use of closed network connection
	E0625 15:58:45.463496       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50368: use of closed network connection
	E0625 15:58:45.673613       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50384: use of closed network connection
	E0625 15:58:45.856439       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50396: use of closed network connection
	E0625 15:58:46.041584       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50416: use of closed network connection
	E0625 15:58:46.218419       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50424: use of closed network connection
	E0625 15:58:46.559429       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50442: use of closed network connection
	E0625 15:58:46.844474       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50462: use of closed network connection
	E0625 15:58:47.016062       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50488: use of closed network connection
	E0625 15:58:47.208625       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50508: use of closed network connection
	E0625 15:58:47.392225       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50522: use of closed network connection
	E0625 15:58:47.579536       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50530: use of closed network connection
	
	
	==> kube-controller-manager [9938e238e129cd0d797a5de776e0d7b756bc8f39188223f4151974b19fb7506c] <==
	I0625 15:58:18.552671       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-674765-m03"
	I0625 15:58:40.402823       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.498371ms"
	I0625 15:58:40.450576       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.600006ms"
	I0625 15:58:40.450701       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.696µs"
	I0625 15:58:40.629374       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="176.065135ms"
	I0625 15:58:40.696690       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.239142ms"
	I0625 15:58:40.752178       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.389917ms"
	I0625 15:58:40.752293       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.326µs"
	I0625 15:58:40.822785       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.424524ms"
	I0625 15:58:40.823068       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.258µs"
	I0625 15:58:40.921183       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.309693ms"
	I0625 15:58:40.921346       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.638µs"
	I0625 15:58:44.287931       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.53826ms"
	I0625 15:58:44.298399       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.405907ms"
	I0625 15:58:44.298630       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.473µs"
	I0625 15:58:44.634066       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.779607ms"
	I0625 15:58:44.634342       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.308µs"
	I0625 15:59:17.920783       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-674765-m04\" does not exist"
	E0625 15:59:17.923263       1 certificate_controller.go:146] Sync csr-d2mwl failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-d2mwl": the object has been modified; please apply your changes to the latest version and try again
	I0625 15:59:17.955139       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-674765-m04" podCIDRs=["10.244.3.0/24"]
	I0625 15:59:18.578564       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-674765-m04"
	I0625 15:59:28.607395       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-674765-m04"
	I0625 16:00:25.912792       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-674765-m04"
	I0625 16:00:26.009569       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.392548ms"
	I0625 16:00:26.009659       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.249µs"
	
	
	==> kube-proxy [7cea2f95fa7a7ef2b8d281a5e1b9c59c317c03084458fe57df036d763e43180c] <==
	I0625 15:56:19.915548       1 server_linux.go:69] "Using iptables proxy"
	I0625 15:56:19.937492       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.128"]
	I0625 15:56:19.974432       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0625 15:56:19.974479       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0625 15:56:19.974492       1 server_linux.go:165] "Using iptables Proxier"
	I0625 15:56:19.977183       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0625 15:56:19.977364       1 server.go:872] "Version info" version="v1.30.2"
	I0625 15:56:19.977392       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0625 15:56:19.978794       1 config.go:192] "Starting service config controller"
	I0625 15:56:19.978825       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0625 15:56:19.978847       1 config.go:101] "Starting endpoint slice config controller"
	I0625 15:56:19.978851       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0625 15:56:19.979407       1 config.go:319] "Starting node config controller"
	I0625 15:56:19.979431       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0625 15:56:20.079672       1 shared_informer.go:320] Caches are synced for node config
	I0625 15:56:20.079718       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0625 15:56:20.079734       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [a7ed432b8fb613e5494924e83c46de1bfe881b19bdb28b693da5298cf99f2e65] <==
	E0625 15:56:04.238145       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0625 15:56:04.238102       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0625 15:56:04.238183       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0625 15:56:04.303707       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0625 15:56:04.303821       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0625 15:56:04.327725       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0625 15:56:04.327837       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0625 15:56:04.409284       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0625 15:56:04.409328       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0625 15:56:04.438451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0625 15:56:04.438497       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0625 15:56:06.878478       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0625 15:58:40.377246       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-vn65x\": pod busybox-fc5497c4f-vn65x is already assigned to node \"ha-674765-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-vn65x" node="ha-674765-m03"
	E0625 15:58:40.377969       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-vn65x\": pod busybox-fc5497c4f-vn65x is already assigned to node \"ha-674765-m03\"" pod="default/busybox-fc5497c4f-vn65x"
	I0625 15:58:40.378142       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-vn65x" node="ha-674765-m03"
	E0625 15:59:18.009813       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-6z24k\": pod kindnet-6z24k is already assigned to node \"ha-674765-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-6z24k" node="ha-674765-m04"
	E0625 15:59:18.009977       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-6z24k\": pod kindnet-6z24k is already assigned to node \"ha-674765-m04\"" pod="kube-system/kindnet-6z24k"
	E0625 15:59:18.010569       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-szzwh\": pod kube-proxy-szzwh is already assigned to node \"ha-674765-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-szzwh" node="ha-674765-m04"
	E0625 15:59:18.010649       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 825f1e68-aec0-44cf-9817-b248a6078673(kube-system/kube-proxy-szzwh) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-szzwh"
	E0625 15:59:18.010677       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-szzwh\": pod kube-proxy-szzwh is already assigned to node \"ha-674765-m04\"" pod="kube-system/kube-proxy-szzwh"
	I0625 15:59:18.010702       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-szzwh" node="ha-674765-m04"
	E0625 15:59:18.040032       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-g48pp\": pod kube-proxy-g48pp is already assigned to node \"ha-674765-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-g48pp" node="ha-674765-m04"
	E0625 15:59:18.040108       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod ab1311ca-030d-4407-87ba-2ff9c8b8feed(kube-system/kube-proxy-g48pp) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-g48pp"
	E0625 15:59:18.040132       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-g48pp\": pod kube-proxy-g48pp is already assigned to node \"ha-674765-m04\"" pod="kube-system/kube-proxy-g48pp"
	I0625 15:59:18.040154       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-g48pp" node="ha-674765-m04"
	
	
	==> kubelet <==
	Jun 25 15:58:06 ha-674765 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 25 15:58:06 ha-674765 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 25 15:58:06 ha-674765 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 25 15:58:40 ha-674765 kubelet[1375]: I0625 15:58:40.412470    1375 topology_manager.go:215] "Topology Admit Handler" podUID="49031b4f-d04c-44bf-9725-094e7df6945c" podNamespace="default" podName="busybox-fc5497c4f-qjw4r"
	Jun 25 15:58:40 ha-674765 kubelet[1375]: I0625 15:58:40.543562    1375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvbww\" (UniqueName: \"kubernetes.io/projected/49031b4f-d04c-44bf-9725-094e7df6945c-kube-api-access-hvbww\") pod \"busybox-fc5497c4f-qjw4r\" (UID: \"49031b4f-d04c-44bf-9725-094e7df6945c\") " pod="default/busybox-fc5497c4f-qjw4r"
	Jun 25 15:59:06 ha-674765 kubelet[1375]: E0625 15:59:06.604778    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 25 15:59:06 ha-674765 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 25 15:59:06 ha-674765 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 25 15:59:06 ha-674765 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 25 15:59:06 ha-674765 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 25 16:00:06 ha-674765 kubelet[1375]: E0625 16:00:06.603591    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 25 16:00:06 ha-674765 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 25 16:00:06 ha-674765 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 25 16:00:06 ha-674765 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 25 16:00:06 ha-674765 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 25 16:01:06 ha-674765 kubelet[1375]: E0625 16:01:06.605369    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 25 16:01:06 ha-674765 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 25 16:01:06 ha-674765 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 25 16:01:06 ha-674765 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 25 16:01:06 ha-674765 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 25 16:02:06 ha-674765 kubelet[1375]: E0625 16:02:06.604802    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 25 16:02:06 ha-674765 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 25 16:02:06 ha-674765 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 25 16:02:06 ha-674765 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 25 16:02:06 ha-674765 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-674765 -n ha-674765
helpers_test.go:261: (dbg) Run:  kubectl --context ha-674765 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.80s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (53.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-674765 status -v=7 --alsologtostderr: exit status 3 (3.202072153s)

                                                
                                                
-- stdout --
	ha-674765
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-674765-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-674765-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-674765-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0625 16:02:13.907143   40911 out.go:291] Setting OutFile to fd 1 ...
	I0625 16:02:13.907233   40911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:02:13.907241   40911 out.go:304] Setting ErrFile to fd 2...
	I0625 16:02:13.907245   40911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:02:13.907414   40911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 16:02:13.907562   40911 out.go:298] Setting JSON to false
	I0625 16:02:13.907582   40911 mustload.go:65] Loading cluster: ha-674765
	I0625 16:02:13.907689   40911 notify.go:220] Checking for updates...
	I0625 16:02:13.908015   40911 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:02:13.908036   40911 status.go:255] checking status of ha-674765 ...
	I0625 16:02:13.908389   40911 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:13.908432   40911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:13.928232   40911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46861
	I0625 16:02:13.928625   40911 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:13.929216   40911 main.go:141] libmachine: Using API Version  1
	I0625 16:02:13.929249   40911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:13.929554   40911 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:13.929724   40911 main.go:141] libmachine: (ha-674765) Calling .GetState
	I0625 16:02:13.931021   40911 status.go:330] ha-674765 host status = "Running" (err=<nil>)
	I0625 16:02:13.931038   40911 host.go:66] Checking if "ha-674765" exists ...
	I0625 16:02:13.931285   40911 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:13.931318   40911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:13.945283   40911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35367
	I0625 16:02:13.945664   40911 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:13.946010   40911 main.go:141] libmachine: Using API Version  1
	I0625 16:02:13.946050   40911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:13.946288   40911 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:13.946490   40911 main.go:141] libmachine: (ha-674765) Calling .GetIP
	I0625 16:02:13.949178   40911 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:02:13.949558   40911 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:02:13.949575   40911 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:02:13.949712   40911 host.go:66] Checking if "ha-674765" exists ...
	I0625 16:02:13.950081   40911 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:13.950128   40911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:13.963789   40911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36097
	I0625 16:02:13.964213   40911 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:13.964677   40911 main.go:141] libmachine: Using API Version  1
	I0625 16:02:13.964697   40911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:13.964995   40911 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:13.965162   40911 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 16:02:13.965359   40911 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:13.965394   40911 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:02:13.967942   40911 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:02:13.968354   40911 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:02:13.968378   40911 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:02:13.968531   40911 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:02:13.968708   40911 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:02:13.968855   40911 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:02:13.968963   40911 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 16:02:14.049934   40911 ssh_runner.go:195] Run: systemctl --version
	I0625 16:02:14.055914   40911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:02:14.072325   40911 kubeconfig.go:125] found "ha-674765" server: "https://192.168.39.254:8443"
	I0625 16:02:14.072348   40911 api_server.go:166] Checking apiserver status ...
	I0625 16:02:14.072380   40911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 16:02:14.087062   40911 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1176/cgroup
	W0625 16:02:14.099854   40911 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1176/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0625 16:02:14.099897   40911 ssh_runner.go:195] Run: ls
	I0625 16:02:14.104318   40911 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0625 16:02:14.108525   40911 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0625 16:02:14.108543   40911 status.go:422] ha-674765 apiserver status = Running (err=<nil>)
	I0625 16:02:14.108551   40911 status.go:257] ha-674765 status: &{Name:ha-674765 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0625 16:02:14.108573   40911 status.go:255] checking status of ha-674765-m02 ...
	I0625 16:02:14.108827   40911 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:14.108859   40911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:14.125428   40911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39077
	I0625 16:02:14.125903   40911 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:14.126358   40911 main.go:141] libmachine: Using API Version  1
	I0625 16:02:14.126382   40911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:14.126758   40911 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:14.126979   40911 main.go:141] libmachine: (ha-674765-m02) Calling .GetState
	I0625 16:02:14.128672   40911 status.go:330] ha-674765-m02 host status = "Running" (err=<nil>)
	I0625 16:02:14.128698   40911 host.go:66] Checking if "ha-674765-m02" exists ...
	I0625 16:02:14.128985   40911 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:14.129041   40911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:14.143368   40911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45631
	I0625 16:02:14.143755   40911 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:14.144165   40911 main.go:141] libmachine: Using API Version  1
	I0625 16:02:14.144188   40911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:14.144485   40911 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:14.144638   40911 main.go:141] libmachine: (ha-674765-m02) Calling .GetIP
	I0625 16:02:14.147379   40911 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:02:14.147734   40911 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 16:02:14.147761   40911 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:02:14.147832   40911 host.go:66] Checking if "ha-674765-m02" exists ...
	I0625 16:02:14.148108   40911 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:14.148141   40911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:14.162495   40911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34347
	I0625 16:02:14.162846   40911 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:14.163310   40911 main.go:141] libmachine: Using API Version  1
	I0625 16:02:14.163338   40911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:14.163659   40911 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:14.163845   40911 main.go:141] libmachine: (ha-674765-m02) Calling .DriverName
	I0625 16:02:14.164041   40911 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:14.164064   40911 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 16:02:14.166864   40911 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:02:14.167349   40911 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 16:02:14.167395   40911 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:02:14.167522   40911 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 16:02:14.167674   40911 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 16:02:14.167820   40911 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 16:02:14.167947   40911 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/id_rsa Username:docker}
	W0625 16:02:16.726716   40911 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.53:22: connect: no route to host
	W0625 16:02:16.726805   40911 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.53:22: connect: no route to host
	E0625 16:02:16.726821   40911 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.53:22: connect: no route to host
	I0625 16:02:16.726829   40911 status.go:257] ha-674765-m02 status: &{Name:ha-674765-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0625 16:02:16.726845   40911 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.53:22: connect: no route to host
	I0625 16:02:16.726852   40911 status.go:255] checking status of ha-674765-m03 ...
	I0625 16:02:16.727154   40911 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:16.727190   40911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:16.742060   40911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41517
	I0625 16:02:16.742445   40911 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:16.742928   40911 main.go:141] libmachine: Using API Version  1
	I0625 16:02:16.742950   40911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:16.743323   40911 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:16.743550   40911 main.go:141] libmachine: (ha-674765-m03) Calling .GetState
	I0625 16:02:16.745287   40911 status.go:330] ha-674765-m03 host status = "Running" (err=<nil>)
	I0625 16:02:16.745305   40911 host.go:66] Checking if "ha-674765-m03" exists ...
	I0625 16:02:16.745573   40911 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:16.745613   40911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:16.759624   40911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37091
	I0625 16:02:16.759933   40911 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:16.760351   40911 main.go:141] libmachine: Using API Version  1
	I0625 16:02:16.760367   40911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:16.760677   40911 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:16.760848   40911 main.go:141] libmachine: (ha-674765-m03) Calling .GetIP
	I0625 16:02:16.763245   40911 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:16.763703   40911 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 16:02:16.763738   40911 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:16.764050   40911 host.go:66] Checking if "ha-674765-m03" exists ...
	I0625 16:02:16.764343   40911 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:16.764390   40911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:16.778021   40911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34965
	I0625 16:02:16.778417   40911 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:16.778803   40911 main.go:141] libmachine: Using API Version  1
	I0625 16:02:16.778820   40911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:16.779118   40911 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:16.779273   40911 main.go:141] libmachine: (ha-674765-m03) Calling .DriverName
	I0625 16:02:16.779443   40911 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:16.779461   40911 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 16:02:16.781849   40911 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:16.782221   40911 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 16:02:16.782251   40911 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:16.782400   40911 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 16:02:16.782582   40911 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 16:02:16.782739   40911 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 16:02:16.782860   40911 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/id_rsa Username:docker}
	I0625 16:02:16.866034   40911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:02:16.881590   40911 kubeconfig.go:125] found "ha-674765" server: "https://192.168.39.254:8443"
	I0625 16:02:16.881613   40911 api_server.go:166] Checking apiserver status ...
	I0625 16:02:16.881646   40911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 16:02:16.897732   40911 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1528/cgroup
	W0625 16:02:16.908695   40911 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1528/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0625 16:02:16.908747   40911 ssh_runner.go:195] Run: ls
	I0625 16:02:16.913047   40911 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0625 16:02:16.916952   40911 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0625 16:02:16.916969   40911 status.go:422] ha-674765-m03 apiserver status = Running (err=<nil>)
	I0625 16:02:16.916985   40911 status.go:257] ha-674765-m03 status: &{Name:ha-674765-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0625 16:02:16.916998   40911 status.go:255] checking status of ha-674765-m04 ...
	I0625 16:02:16.917324   40911 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:16.917368   40911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:16.932579   40911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36065
	I0625 16:02:16.932947   40911 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:16.933416   40911 main.go:141] libmachine: Using API Version  1
	I0625 16:02:16.933441   40911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:16.933710   40911 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:16.933895   40911 main.go:141] libmachine: (ha-674765-m04) Calling .GetState
	I0625 16:02:16.935468   40911 status.go:330] ha-674765-m04 host status = "Running" (err=<nil>)
	I0625 16:02:16.935491   40911 host.go:66] Checking if "ha-674765-m04" exists ...
	I0625 16:02:16.935750   40911 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:16.935787   40911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:16.949385   40911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42939
	I0625 16:02:16.949751   40911 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:16.950150   40911 main.go:141] libmachine: Using API Version  1
	I0625 16:02:16.950180   40911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:16.950507   40911 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:16.950716   40911 main.go:141] libmachine: (ha-674765-m04) Calling .GetIP
	I0625 16:02:16.953323   40911 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:16.953791   40911 main.go:141] libmachine: (ha-674765-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:21:a2", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:59:02 +0000 UTC Type:0 Mac:52:54:00:7a:21:a2 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-674765-m04 Clientid:01:52:54:00:7a:21:a2}
	I0625 16:02:16.953819   40911 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined IP address 192.168.39.7 and MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:16.953951   40911 host.go:66] Checking if "ha-674765-m04" exists ...
	I0625 16:02:16.954277   40911 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:16.954321   40911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:16.968769   40911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32811
	I0625 16:02:16.969095   40911 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:16.969487   40911 main.go:141] libmachine: Using API Version  1
	I0625 16:02:16.969504   40911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:16.969783   40911 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:16.969952   40911 main.go:141] libmachine: (ha-674765-m04) Calling .DriverName
	I0625 16:02:16.970103   40911 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:16.970124   40911 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHHostname
	I0625 16:02:16.972643   40911 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:16.973052   40911 main.go:141] libmachine: (ha-674765-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:21:a2", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:59:02 +0000 UTC Type:0 Mac:52:54:00:7a:21:a2 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-674765-m04 Clientid:01:52:54:00:7a:21:a2}
	I0625 16:02:16.973078   40911 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined IP address 192.168.39.7 and MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:16.973204   40911 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHPort
	I0625 16:02:16.973366   40911 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHKeyPath
	I0625 16:02:16.973507   40911 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHUsername
	I0625 16:02:16.973649   40911 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m04/id_rsa Username:docker}
	I0625 16:02:17.053952   40911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:02:17.068588   40911 status.go:257] ha-674765-m04 status: &{Name:ha-674765-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
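The stderr trace above walks the same probe sequence for each control-plane node: open an SSH session, run `df -h /var` for storage capacity, look up the kube-apiserver PID with `pgrep`, and finally hit the cluster's load-balanced endpoint `https://192.168.39.254:8443/healthz`, treating an HTTP 200 with body "ok" as "apiserver: Running". The snippet below is a minimal sketch of that last health probe only; it is not minikube's implementation, and the endpoint constant and `apiserverHealthy` helper are illustrative. TLS verification is skipped purely because the cluster serves a self-signed certificate in this test environment.

```go
// Minimal sketch of the /healthz probe pattern seen in the log above.
// Assumption: the endpoint and helper name are illustrative, not minikube code.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// apiserverHealthy returns true when GET <endpoint>/healthz answers 200 "ok".
func apiserverHealthy(endpoint string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The test cluster uses a self-signed certificate; a real client
			// would pin the cluster CA instead of skipping verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
}

func main() {
	healthy, err := apiserverHealthy("https://192.168.39.254:8443")
	fmt.Printf("apiserver healthy=%v err=%v\n", healthy, err)
}
```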
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-674765 status -v=7 --alsologtostderr: exit status 3 (5.180342879s)

                                                
                                                
-- stdout --
	ha-674765
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-674765-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-674765-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-674765-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0625 16:02:18.074312   41012 out.go:291] Setting OutFile to fd 1 ...
	I0625 16:02:18.074557   41012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:02:18.074566   41012 out.go:304] Setting ErrFile to fd 2...
	I0625 16:02:18.074571   41012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:02:18.074727   41012 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 16:02:18.074871   41012 out.go:298] Setting JSON to false
	I0625 16:02:18.074892   41012 mustload.go:65] Loading cluster: ha-674765
	I0625 16:02:18.075016   41012 notify.go:220] Checking for updates...
	I0625 16:02:18.075214   41012 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:02:18.075227   41012 status.go:255] checking status of ha-674765 ...
	I0625 16:02:18.075572   41012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:18.075622   41012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:18.094885   41012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33349
	I0625 16:02:18.095235   41012 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:18.095827   41012 main.go:141] libmachine: Using API Version  1
	I0625 16:02:18.095855   41012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:18.096249   41012 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:18.096409   41012 main.go:141] libmachine: (ha-674765) Calling .GetState
	I0625 16:02:18.097855   41012 status.go:330] ha-674765 host status = "Running" (err=<nil>)
	I0625 16:02:18.097882   41012 host.go:66] Checking if "ha-674765" exists ...
	I0625 16:02:18.098168   41012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:18.098230   41012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:18.112351   41012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43191
	I0625 16:02:18.112704   41012 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:18.113134   41012 main.go:141] libmachine: Using API Version  1
	I0625 16:02:18.113154   41012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:18.113445   41012 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:18.113625   41012 main.go:141] libmachine: (ha-674765) Calling .GetIP
	I0625 16:02:18.116138   41012 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:02:18.116506   41012 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:02:18.116524   41012 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:02:18.116698   41012 host.go:66] Checking if "ha-674765" exists ...
	I0625 16:02:18.116951   41012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:18.116985   41012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:18.130928   41012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44551
	I0625 16:02:18.131240   41012 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:18.131687   41012 main.go:141] libmachine: Using API Version  1
	I0625 16:02:18.131711   41012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:18.132014   41012 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:18.132193   41012 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 16:02:18.132380   41012 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:18.132412   41012 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:02:18.135017   41012 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:02:18.135421   41012 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:02:18.135440   41012 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:02:18.135598   41012 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:02:18.135758   41012 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:02:18.135893   41012 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:02:18.136005   41012 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 16:02:18.218937   41012 ssh_runner.go:195] Run: systemctl --version
	I0625 16:02:18.225255   41012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:02:18.240759   41012 kubeconfig.go:125] found "ha-674765" server: "https://192.168.39.254:8443"
	I0625 16:02:18.240784   41012 api_server.go:166] Checking apiserver status ...
	I0625 16:02:18.240811   41012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 16:02:18.255937   41012 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1176/cgroup
	W0625 16:02:18.266305   41012 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1176/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0625 16:02:18.266357   41012 ssh_runner.go:195] Run: ls
	I0625 16:02:18.270825   41012 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0625 16:02:18.277081   41012 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0625 16:02:18.277101   41012 status.go:422] ha-674765 apiserver status = Running (err=<nil>)
	I0625 16:02:18.277110   41012 status.go:257] ha-674765 status: &{Name:ha-674765 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0625 16:02:18.277128   41012 status.go:255] checking status of ha-674765-m02 ...
	I0625 16:02:18.277476   41012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:18.277521   41012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:18.291844   41012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44467
	I0625 16:02:18.292203   41012 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:18.292640   41012 main.go:141] libmachine: Using API Version  1
	I0625 16:02:18.292656   41012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:18.292959   41012 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:18.293136   41012 main.go:141] libmachine: (ha-674765-m02) Calling .GetState
	I0625 16:02:18.294677   41012 status.go:330] ha-674765-m02 host status = "Running" (err=<nil>)
	I0625 16:02:18.294692   41012 host.go:66] Checking if "ha-674765-m02" exists ...
	I0625 16:02:18.294993   41012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:18.295026   41012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:18.310629   41012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40717
	I0625 16:02:18.310924   41012 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:18.311366   41012 main.go:141] libmachine: Using API Version  1
	I0625 16:02:18.311389   41012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:18.311658   41012 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:18.311820   41012 main.go:141] libmachine: (ha-674765-m02) Calling .GetIP
	I0625 16:02:18.314215   41012 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:02:18.314759   41012 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 16:02:18.314798   41012 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:02:18.314919   41012 host.go:66] Checking if "ha-674765-m02" exists ...
	I0625 16:02:18.315220   41012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:18.315255   41012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:18.328852   41012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35243
	I0625 16:02:18.329187   41012 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:18.329584   41012 main.go:141] libmachine: Using API Version  1
	I0625 16:02:18.329607   41012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:18.329887   41012 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:18.330075   41012 main.go:141] libmachine: (ha-674765-m02) Calling .DriverName
	I0625 16:02:18.330225   41012 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:18.330248   41012 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 16:02:18.332528   41012 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:02:18.332928   41012 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 16:02:18.332958   41012 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:02:18.333089   41012 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 16:02:18.333238   41012 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 16:02:18.333374   41012 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 16:02:18.333513   41012 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/id_rsa Username:docker}
	W0625 16:02:19.794714   41012 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.53:22: connect: no route to host
	I0625 16:02:19.794781   41012 retry.go:31] will retry after 245.814421ms: dial tcp 192.168.39.53:22: connect: no route to host
	W0625 16:02:22.866704   41012 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.53:22: connect: no route to host
	W0625 16:02:22.866810   41012 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.53:22: connect: no route to host
	E0625 16:02:22.866835   41012 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.53:22: connect: no route to host
	I0625 16:02:22.866859   41012 status.go:257] ha-674765-m02 status: &{Name:ha-674765-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0625 16:02:22.866882   41012 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.53:22: connect: no route to host
	I0625 16:02:22.866902   41012 status.go:255] checking status of ha-674765-m03 ...
	I0625 16:02:22.867306   41012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:22.867363   41012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:22.882410   41012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46339
	I0625 16:02:22.882860   41012 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:22.883415   41012 main.go:141] libmachine: Using API Version  1
	I0625 16:02:22.883444   41012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:22.883745   41012 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:22.883951   41012 main.go:141] libmachine: (ha-674765-m03) Calling .GetState
	I0625 16:02:22.885640   41012 status.go:330] ha-674765-m03 host status = "Running" (err=<nil>)
	I0625 16:02:22.885656   41012 host.go:66] Checking if "ha-674765-m03" exists ...
	I0625 16:02:22.885975   41012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:22.886018   41012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:22.901180   41012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38825
	I0625 16:02:22.901577   41012 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:22.902019   41012 main.go:141] libmachine: Using API Version  1
	I0625 16:02:22.902042   41012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:22.902336   41012 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:22.902530   41012 main.go:141] libmachine: (ha-674765-m03) Calling .GetIP
	I0625 16:02:22.905097   41012 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:22.905476   41012 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 16:02:22.905499   41012 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:22.905623   41012 host.go:66] Checking if "ha-674765-m03" exists ...
	I0625 16:02:22.905990   41012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:22.906032   41012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:22.919965   41012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45831
	I0625 16:02:22.920422   41012 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:22.920910   41012 main.go:141] libmachine: Using API Version  1
	I0625 16:02:22.920930   41012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:22.921235   41012 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:22.921410   41012 main.go:141] libmachine: (ha-674765-m03) Calling .DriverName
	I0625 16:02:22.921613   41012 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:22.921635   41012 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 16:02:22.924310   41012 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:22.924722   41012 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 16:02:22.924747   41012 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:22.924913   41012 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 16:02:22.925079   41012 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 16:02:22.925230   41012 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 16:02:22.925395   41012 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/id_rsa Username:docker}
	I0625 16:02:23.014075   41012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:02:23.028853   41012 kubeconfig.go:125] found "ha-674765" server: "https://192.168.39.254:8443"
	I0625 16:02:23.028888   41012 api_server.go:166] Checking apiserver status ...
	I0625 16:02:23.028927   41012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 16:02:23.042522   41012 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1528/cgroup
	W0625 16:02:23.051506   41012 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1528/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0625 16:02:23.051550   41012 ssh_runner.go:195] Run: ls
	I0625 16:02:23.055868   41012 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0625 16:02:23.060214   41012 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0625 16:02:23.060239   41012 status.go:422] ha-674765-m03 apiserver status = Running (err=<nil>)
	I0625 16:02:23.060250   41012 status.go:257] ha-674765-m03 status: &{Name:ha-674765-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0625 16:02:23.060269   41012 status.go:255] checking status of ha-674765-m04 ...
	I0625 16:02:23.060572   41012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:23.060611   41012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:23.075179   41012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37499
	I0625 16:02:23.075561   41012 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:23.076161   41012 main.go:141] libmachine: Using API Version  1
	I0625 16:02:23.076181   41012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:23.076478   41012 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:23.076658   41012 main.go:141] libmachine: (ha-674765-m04) Calling .GetState
	I0625 16:02:23.078125   41012 status.go:330] ha-674765-m04 host status = "Running" (err=<nil>)
	I0625 16:02:23.078139   41012 host.go:66] Checking if "ha-674765-m04" exists ...
	I0625 16:02:23.078431   41012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:23.078463   41012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:23.092493   41012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35223
	I0625 16:02:23.092842   41012 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:23.093280   41012 main.go:141] libmachine: Using API Version  1
	I0625 16:02:23.093298   41012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:23.093621   41012 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:23.093794   41012 main.go:141] libmachine: (ha-674765-m04) Calling .GetIP
	I0625 16:02:23.096439   41012 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:23.096876   41012 main.go:141] libmachine: (ha-674765-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:21:a2", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:59:02 +0000 UTC Type:0 Mac:52:54:00:7a:21:a2 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-674765-m04 Clientid:01:52:54:00:7a:21:a2}
	I0625 16:02:23.096902   41012 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined IP address 192.168.39.7 and MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:23.097031   41012 host.go:66] Checking if "ha-674765-m04" exists ...
	I0625 16:02:23.097309   41012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:23.097340   41012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:23.112350   41012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33239
	I0625 16:02:23.112765   41012 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:23.113134   41012 main.go:141] libmachine: Using API Version  1
	I0625 16:02:23.113155   41012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:23.113473   41012 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:23.113614   41012 main.go:141] libmachine: (ha-674765-m04) Calling .DriverName
	I0625 16:02:23.113797   41012 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:23.113821   41012 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHHostname
	I0625 16:02:23.116416   41012 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:23.116807   41012 main.go:141] libmachine: (ha-674765-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:21:a2", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:59:02 +0000 UTC Type:0 Mac:52:54:00:7a:21:a2 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-674765-m04 Clientid:01:52:54:00:7a:21:a2}
	I0625 16:02:23.116827   41012 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined IP address 192.168.39.7 and MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:23.116987   41012 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHPort
	I0625 16:02:23.117160   41012 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHKeyPath
	I0625 16:02:23.117304   41012 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHUsername
	I0625 16:02:23.117462   41012 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m04/id_rsa Username:docker}
	I0625 16:02:23.197800   41012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:02:23.212615   41012 status.go:257] ha-674765-m04 status: &{Name:ha-674765-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
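This second run fails the same way: every TCP dial to ha-674765-m02 at 192.168.39.53:22 returns "no route to host", the short retry budget is exhausted, and the node is downgraded to `host: Error` with kubelet and apiserver reported as Nonexistent, which is what drives the exit status 3. The sketch below illustrates that degrade-on-unreachable pattern under stated assumptions; the `hostSSHReachable` helper, attempt count, and backoff are hypothetical and only mirror the "will retry after ..." lines in the log, not minikube's actual retry logic.

```go
// Sketch of the dial-retry-then-degrade behaviour recorded for ha-674765-m02.
// Assumption: helper name, attempt count, and backoff are illustrative only.
package main

import (
	"fmt"
	"net"
	"time"
)

// hostSSHReachable retries a plain TCP dial to the node's SSH port with a
// short backoff before giving up, as the log's retry lines suggest.
func hostSSHReachable(addr string, attempts int, backoff time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		var conn net.Conn
		conn, err = net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(backoff)
	}
	return fmt.Errorf("dial %s: %w", addr, err)
}

func main() {
	status := map[string]string{"Host": "Running", "Kubelet": "Running", "APIServer": "Running"}
	if err := hostSSHReachable("192.168.39.53:22", 3, 250*time.Millisecond); err != nil {
		// Unreachable over SSH: report the node the way the status output does.
		status["Host"], status["Kubelet"], status["APIServer"] = "Error", "Nonexistent", "Nonexistent"
		fmt.Println("status error:", err)
	}
	fmt.Println(status)
}
```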
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-674765 status -v=7 --alsologtostderr: exit status 3 (4.565422788s)

                                                
                                                
-- stdout --
	ha-674765
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-674765-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-674765-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-674765-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0625 16:02:25.011336   41113 out.go:291] Setting OutFile to fd 1 ...
	I0625 16:02:25.011454   41113 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:02:25.011463   41113 out.go:304] Setting ErrFile to fd 2...
	I0625 16:02:25.011467   41113 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:02:25.011660   41113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 16:02:25.011861   41113 out.go:298] Setting JSON to false
	I0625 16:02:25.011885   41113 mustload.go:65] Loading cluster: ha-674765
	I0625 16:02:25.011982   41113 notify.go:220] Checking for updates...
	I0625 16:02:25.012323   41113 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:02:25.012342   41113 status.go:255] checking status of ha-674765 ...
	I0625 16:02:25.012768   41113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:25.012853   41113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:25.028423   41113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42633
	I0625 16:02:25.028765   41113 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:25.029446   41113 main.go:141] libmachine: Using API Version  1
	I0625 16:02:25.029484   41113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:25.029800   41113 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:25.029994   41113 main.go:141] libmachine: (ha-674765) Calling .GetState
	I0625 16:02:25.031741   41113 status.go:330] ha-674765 host status = "Running" (err=<nil>)
	I0625 16:02:25.031757   41113 host.go:66] Checking if "ha-674765" exists ...
	I0625 16:02:25.032141   41113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:25.032184   41113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:25.046353   41113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35307
	I0625 16:02:25.046694   41113 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:25.047083   41113 main.go:141] libmachine: Using API Version  1
	I0625 16:02:25.047109   41113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:25.047395   41113 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:25.047574   41113 main.go:141] libmachine: (ha-674765) Calling .GetIP
	I0625 16:02:25.050062   41113 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:02:25.050492   41113 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:02:25.050525   41113 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:02:25.050650   41113 host.go:66] Checking if "ha-674765" exists ...
	I0625 16:02:25.050923   41113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:25.050963   41113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:25.064180   41113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43213
	I0625 16:02:25.064552   41113 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:25.065051   41113 main.go:141] libmachine: Using API Version  1
	I0625 16:02:25.065076   41113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:25.065384   41113 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:25.065550   41113 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 16:02:25.065714   41113 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:25.065741   41113 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:02:25.068177   41113 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:02:25.068549   41113 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:02:25.068572   41113 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:02:25.068700   41113 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:02:25.068869   41113 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:02:25.068995   41113 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:02:25.069138   41113 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 16:02:25.150151   41113 ssh_runner.go:195] Run: systemctl --version
	I0625 16:02:25.156723   41113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:02:25.176257   41113 kubeconfig.go:125] found "ha-674765" server: "https://192.168.39.254:8443"
	I0625 16:02:25.176284   41113 api_server.go:166] Checking apiserver status ...
	I0625 16:02:25.176323   41113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 16:02:25.202872   41113 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1176/cgroup
	W0625 16:02:25.214603   41113 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1176/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0625 16:02:25.214657   41113 ssh_runner.go:195] Run: ls
	I0625 16:02:25.220581   41113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0625 16:02:25.226118   41113 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0625 16:02:25.226142   41113 status.go:422] ha-674765 apiserver status = Running (err=<nil>)
	I0625 16:02:25.226155   41113 status.go:257] ha-674765 status: &{Name:ha-674765 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0625 16:02:25.226178   41113 status.go:255] checking status of ha-674765-m02 ...
	I0625 16:02:25.226598   41113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:25.226658   41113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:25.241050   41113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37701
	I0625 16:02:25.241426   41113 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:25.241863   41113 main.go:141] libmachine: Using API Version  1
	I0625 16:02:25.241877   41113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:25.242170   41113 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:25.242387   41113 main.go:141] libmachine: (ha-674765-m02) Calling .GetState
	I0625 16:02:25.244082   41113 status.go:330] ha-674765-m02 host status = "Running" (err=<nil>)
	I0625 16:02:25.244098   41113 host.go:66] Checking if "ha-674765-m02" exists ...
	I0625 16:02:25.244357   41113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:25.244393   41113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:25.258101   41113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45207
	I0625 16:02:25.258433   41113 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:25.258820   41113 main.go:141] libmachine: Using API Version  1
	I0625 16:02:25.258839   41113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:25.259166   41113 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:25.259338   41113 main.go:141] libmachine: (ha-674765-m02) Calling .GetIP
	I0625 16:02:25.261805   41113 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:02:25.262167   41113 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 16:02:25.262195   41113 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:02:25.262316   41113 host.go:66] Checking if "ha-674765-m02" exists ...
	I0625 16:02:25.262722   41113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:25.262765   41113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:25.276902   41113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39113
	I0625 16:02:25.277227   41113 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:25.277608   41113 main.go:141] libmachine: Using API Version  1
	I0625 16:02:25.277627   41113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:25.277954   41113 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:25.278144   41113 main.go:141] libmachine: (ha-674765-m02) Calling .DriverName
	I0625 16:02:25.278316   41113 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:25.278336   41113 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 16:02:25.280558   41113 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:02:25.281017   41113 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 16:02:25.281041   41113 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:02:25.281175   41113 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 16:02:25.281326   41113 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 16:02:25.281471   41113 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 16:02:25.281582   41113 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/id_rsa Username:docker}
	W0625 16:02:25.942689   41113 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.53:22: connect: no route to host
	I0625 16:02:25.942750   41113 retry.go:31] will retry after 177.125957ms: dial tcp 192.168.39.53:22: connect: no route to host
	W0625 16:02:29.170697   41113 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.53:22: connect: no route to host
	W0625 16:02:29.170796   41113 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.53:22: connect: no route to host
	E0625 16:02:29.170814   41113 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.53:22: connect: no route to host
	I0625 16:02:29.170833   41113 status.go:257] ha-674765-m02 status: &{Name:ha-674765-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0625 16:02:29.170851   41113 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.53:22: connect: no route to host
	I0625 16:02:29.170862   41113 status.go:255] checking status of ha-674765-m03 ...
	I0625 16:02:29.171173   41113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:29.171220   41113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:29.186767   41113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32905
	I0625 16:02:29.187206   41113 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:29.187655   41113 main.go:141] libmachine: Using API Version  1
	I0625 16:02:29.187676   41113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:29.187999   41113 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:29.188180   41113 main.go:141] libmachine: (ha-674765-m03) Calling .GetState
	I0625 16:02:29.189671   41113 status.go:330] ha-674765-m03 host status = "Running" (err=<nil>)
	I0625 16:02:29.189691   41113 host.go:66] Checking if "ha-674765-m03" exists ...
	I0625 16:02:29.190057   41113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:29.190094   41113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:29.203669   41113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35663
	I0625 16:02:29.204055   41113 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:29.204479   41113 main.go:141] libmachine: Using API Version  1
	I0625 16:02:29.204497   41113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:29.204753   41113 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:29.204924   41113 main.go:141] libmachine: (ha-674765-m03) Calling .GetIP
	I0625 16:02:29.207981   41113 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:29.208346   41113 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 16:02:29.208371   41113 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:29.208471   41113 host.go:66] Checking if "ha-674765-m03" exists ...
	I0625 16:02:29.208800   41113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:29.208839   41113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:29.223354   41113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45509
	I0625 16:02:29.223846   41113 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:29.224336   41113 main.go:141] libmachine: Using API Version  1
	I0625 16:02:29.224362   41113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:29.224697   41113 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:29.224874   41113 main.go:141] libmachine: (ha-674765-m03) Calling .DriverName
	I0625 16:02:29.225062   41113 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:29.225084   41113 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 16:02:29.227570   41113 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:29.227956   41113 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 16:02:29.227983   41113 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:29.228211   41113 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 16:02:29.228381   41113 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 16:02:29.228541   41113 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 16:02:29.228687   41113 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/id_rsa Username:docker}
	I0625 16:02:29.318200   41113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:02:29.339789   41113 kubeconfig.go:125] found "ha-674765" server: "https://192.168.39.254:8443"
	I0625 16:02:29.339812   41113 api_server.go:166] Checking apiserver status ...
	I0625 16:02:29.339839   41113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 16:02:29.356822   41113 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1528/cgroup
	W0625 16:02:29.368079   41113 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1528/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0625 16:02:29.368121   41113 ssh_runner.go:195] Run: ls
	I0625 16:02:29.373597   41113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0625 16:02:29.379766   41113 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0625 16:02:29.379789   41113 status.go:422] ha-674765-m03 apiserver status = Running (err=<nil>)
	I0625 16:02:29.379800   41113 status.go:257] ha-674765-m03 status: &{Name:ha-674765-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0625 16:02:29.379817   41113 status.go:255] checking status of ha-674765-m04 ...
	I0625 16:02:29.380204   41113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:29.380242   41113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:29.395766   41113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41987
	I0625 16:02:29.396172   41113 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:29.396616   41113 main.go:141] libmachine: Using API Version  1
	I0625 16:02:29.396660   41113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:29.396955   41113 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:29.397164   41113 main.go:141] libmachine: (ha-674765-m04) Calling .GetState
	I0625 16:02:29.398634   41113 status.go:330] ha-674765-m04 host status = "Running" (err=<nil>)
	I0625 16:02:29.398647   41113 host.go:66] Checking if "ha-674765-m04" exists ...
	I0625 16:02:29.398924   41113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:29.398954   41113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:29.413363   41113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41663
	I0625 16:02:29.413729   41113 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:29.414225   41113 main.go:141] libmachine: Using API Version  1
	I0625 16:02:29.414243   41113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:29.414576   41113 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:29.414783   41113 main.go:141] libmachine: (ha-674765-m04) Calling .GetIP
	I0625 16:02:29.417507   41113 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:29.418019   41113 main.go:141] libmachine: (ha-674765-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:21:a2", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:59:02 +0000 UTC Type:0 Mac:52:54:00:7a:21:a2 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-674765-m04 Clientid:01:52:54:00:7a:21:a2}
	I0625 16:02:29.418052   41113 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined IP address 192.168.39.7 and MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:29.418205   41113 host.go:66] Checking if "ha-674765-m04" exists ...
	I0625 16:02:29.418535   41113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:29.418568   41113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:29.432679   41113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33413
	I0625 16:02:29.433023   41113 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:29.433400   41113 main.go:141] libmachine: Using API Version  1
	I0625 16:02:29.433425   41113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:29.433704   41113 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:29.433869   41113 main.go:141] libmachine: (ha-674765-m04) Calling .DriverName
	I0625 16:02:29.434023   41113 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:29.434047   41113 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHHostname
	I0625 16:02:29.436490   41113 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:29.436853   41113 main.go:141] libmachine: (ha-674765-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:21:a2", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:59:02 +0000 UTC Type:0 Mac:52:54:00:7a:21:a2 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-674765-m04 Clientid:01:52:54:00:7a:21:a2}
	I0625 16:02:29.436873   41113 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined IP address 192.168.39.7 and MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:29.437006   41113 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHPort
	I0625 16:02:29.437135   41113 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHKeyPath
	I0625 16:02:29.437269   41113 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHUsername
	I0625 16:02:29.437394   41113 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m04/id_rsa Username:docker}
	I0625 16:02:29.518101   41113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:02:29.533174   41113 status.go:257] ha-674765-m04 status: &{Name:ha-674765-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
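The "failed to get storage capacity of /var" errors above come from the per-node disk probe: the status command opens an SSH session to each node and runs a one-line df pipeline, and when the VM is unreachable (ha-674765-m02 at 192.168.39.53:22 here) the dial itself fails and the node is reported as Host:Error. A minimal standalone sketch of that probe, assuming golang.org/x/crypto/ssh, a hypothetical key path, and an arbitrary timeout rather than minikube's actual code:

	package main
	
	import (
		"fmt"
		"log"
		"os"
		"time"
	
		"golang.org/x/crypto/ssh"
	)
	
	func main() {
		// Hypothetical key path; the report uses per-machine id_rsa keys under .minikube/machines/.
		key, err := os.ReadFile("/path/to/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
	
		cfg := &ssh.ClientConfig{
			User:            "docker", // username shown in the log's ssh client lines
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
			Timeout:         5 * time.Second, // arbitrary for this sketch
		}
	
		// "no route to host" errors like the ones for 192.168.39.53:22 surface from this dial.
		client, err := ssh.Dial("tcp", "192.168.39.77:22", cfg)
		if err != nil {
			log.Fatalf("new client: %v", err)
		}
		defer client.Close()
	
		sess, err := client.NewSession()
		if err != nil {
			log.Fatalf("NewSession: %v", err)
		}
		defer sess.Close()
	
		// The same probe the log runs on every node: print the use% column for /var.
		out, err := sess.Output(`df -h /var | awk 'NR==2{print $5}'`)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("/var usage: %s", out)
	}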
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-674765 status -v=7 --alsologtostderr: exit status 3 (4.294209915s)

                                                
                                                
-- stdout --
	ha-674765
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-674765-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-674765-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-674765-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0625 16:02:31.612252   41229 out.go:291] Setting OutFile to fd 1 ...
	I0625 16:02:31.612409   41229 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:02:31.612419   41229 out.go:304] Setting ErrFile to fd 2...
	I0625 16:02:31.612423   41229 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:02:31.612605   41229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 16:02:31.612800   41229 out.go:298] Setting JSON to false
	I0625 16:02:31.612827   41229 mustload.go:65] Loading cluster: ha-674765
	I0625 16:02:31.612953   41229 notify.go:220] Checking for updates...
	I0625 16:02:31.613288   41229 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:02:31.613308   41229 status.go:255] checking status of ha-674765 ...
	I0625 16:02:31.613770   41229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:31.613829   41229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:31.633775   41229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37657
	I0625 16:02:31.634177   41229 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:31.634964   41229 main.go:141] libmachine: Using API Version  1
	I0625 16:02:31.634997   41229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:31.635367   41229 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:31.635555   41229 main.go:141] libmachine: (ha-674765) Calling .GetState
	I0625 16:02:31.637008   41229 status.go:330] ha-674765 host status = "Running" (err=<nil>)
	I0625 16:02:31.637023   41229 host.go:66] Checking if "ha-674765" exists ...
	I0625 16:02:31.637319   41229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:31.637355   41229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:31.651929   41229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33955
	I0625 16:02:31.652337   41229 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:31.652767   41229 main.go:141] libmachine: Using API Version  1
	I0625 16:02:31.652788   41229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:31.653110   41229 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:31.653327   41229 main.go:141] libmachine: (ha-674765) Calling .GetIP
	I0625 16:02:31.656104   41229 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:02:31.656575   41229 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:02:31.656611   41229 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:02:31.656749   41229 host.go:66] Checking if "ha-674765" exists ...
	I0625 16:02:31.657065   41229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:31.657105   41229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:31.672191   41229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36951
	I0625 16:02:31.672632   41229 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:31.673194   41229 main.go:141] libmachine: Using API Version  1
	I0625 16:02:31.673224   41229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:31.673595   41229 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:31.673784   41229 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 16:02:31.673988   41229 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:31.674015   41229 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:02:31.676857   41229 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:02:31.677354   41229 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:02:31.677377   41229 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:02:31.677513   41229 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:02:31.677683   41229 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:02:31.677824   41229 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:02:31.677959   41229 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 16:02:31.765877   41229 ssh_runner.go:195] Run: systemctl --version
	I0625 16:02:31.772784   41229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:02:31.791076   41229 kubeconfig.go:125] found "ha-674765" server: "https://192.168.39.254:8443"
	I0625 16:02:31.791101   41229 api_server.go:166] Checking apiserver status ...
	I0625 16:02:31.791138   41229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 16:02:31.805914   41229 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1176/cgroup
	W0625 16:02:31.815223   41229 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1176/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0625 16:02:31.815271   41229 ssh_runner.go:195] Run: ls
	I0625 16:02:31.820490   41229 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0625 16:02:31.824802   41229 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0625 16:02:31.824824   41229 status.go:422] ha-674765 apiserver status = Running (err=<nil>)
	I0625 16:02:31.824836   41229 status.go:257] ha-674765 status: &{Name:ha-674765 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0625 16:02:31.824859   41229 status.go:255] checking status of ha-674765-m02 ...
	I0625 16:02:31.825152   41229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:31.825194   41229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:31.840986   41229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45495
	I0625 16:02:31.841385   41229 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:31.841851   41229 main.go:141] libmachine: Using API Version  1
	I0625 16:02:31.841872   41229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:31.842191   41229 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:31.842388   41229 main.go:141] libmachine: (ha-674765-m02) Calling .GetState
	I0625 16:02:31.844064   41229 status.go:330] ha-674765-m02 host status = "Running" (err=<nil>)
	I0625 16:02:31.844082   41229 host.go:66] Checking if "ha-674765-m02" exists ...
	I0625 16:02:31.844399   41229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:31.844432   41229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:31.859084   41229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40081
	I0625 16:02:31.859464   41229 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:31.859901   41229 main.go:141] libmachine: Using API Version  1
	I0625 16:02:31.859922   41229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:31.860243   41229 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:31.860421   41229 main.go:141] libmachine: (ha-674765-m02) Calling .GetIP
	I0625 16:02:31.863288   41229 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:02:31.863771   41229 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 16:02:31.863791   41229 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:02:31.863933   41229 host.go:66] Checking if "ha-674765-m02" exists ...
	I0625 16:02:31.864322   41229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:31.864365   41229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:31.878517   41229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33457
	I0625 16:02:31.878884   41229 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:31.879322   41229 main.go:141] libmachine: Using API Version  1
	I0625 16:02:31.879350   41229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:31.879601   41229 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:31.879751   41229 main.go:141] libmachine: (ha-674765-m02) Calling .DriverName
	I0625 16:02:31.879936   41229 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:31.879953   41229 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 16:02:31.882180   41229 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:02:31.882591   41229 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 16:02:31.882616   41229 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:02:31.882752   41229 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 16:02:31.882894   41229 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 16:02:31.883065   41229 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 16:02:31.883203   41229 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/id_rsa Username:docker}
	W0625 16:02:32.242761   41229 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.53:22: connect: no route to host
	I0625 16:02:32.242806   41229 retry.go:31] will retry after 201.247987ms: dial tcp 192.168.39.53:22: connect: no route to host
	W0625 16:02:35.510706   41229 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.53:22: connect: no route to host
	W0625 16:02:35.510801   41229 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.53:22: connect: no route to host
	E0625 16:02:35.510824   41229 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.53:22: connect: no route to host
	I0625 16:02:35.510838   41229 status.go:257] ha-674765-m02 status: &{Name:ha-674765-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0625 16:02:35.510876   41229 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.53:22: connect: no route to host
	I0625 16:02:35.510883   41229 status.go:255] checking status of ha-674765-m03 ...
	I0625 16:02:35.511175   41229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:35.511218   41229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:35.525862   41229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44073
	I0625 16:02:35.526218   41229 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:35.526747   41229 main.go:141] libmachine: Using API Version  1
	I0625 16:02:35.526767   41229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:35.527148   41229 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:35.527314   41229 main.go:141] libmachine: (ha-674765-m03) Calling .GetState
	I0625 16:02:35.528850   41229 status.go:330] ha-674765-m03 host status = "Running" (err=<nil>)
	I0625 16:02:35.528865   41229 host.go:66] Checking if "ha-674765-m03" exists ...
	I0625 16:02:35.529172   41229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:35.529217   41229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:35.543451   41229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35841
	I0625 16:02:35.543902   41229 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:35.544354   41229 main.go:141] libmachine: Using API Version  1
	I0625 16:02:35.544387   41229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:35.544689   41229 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:35.544824   41229 main.go:141] libmachine: (ha-674765-m03) Calling .GetIP
	I0625 16:02:35.547447   41229 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:35.547821   41229 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 16:02:35.547844   41229 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:35.548000   41229 host.go:66] Checking if "ha-674765-m03" exists ...
	I0625 16:02:35.548289   41229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:35.548326   41229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:35.563168   41229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42687
	I0625 16:02:35.563536   41229 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:35.563938   41229 main.go:141] libmachine: Using API Version  1
	I0625 16:02:35.563957   41229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:35.564248   41229 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:35.564425   41229 main.go:141] libmachine: (ha-674765-m03) Calling .DriverName
	I0625 16:02:35.564611   41229 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:35.564630   41229 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 16:02:35.567449   41229 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:35.567839   41229 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 16:02:35.567868   41229 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:35.567997   41229 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 16:02:35.568156   41229 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 16:02:35.568348   41229 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 16:02:35.568503   41229 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/id_rsa Username:docker}
	I0625 16:02:35.654848   41229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:02:35.673370   41229 kubeconfig.go:125] found "ha-674765" server: "https://192.168.39.254:8443"
	I0625 16:02:35.673397   41229 api_server.go:166] Checking apiserver status ...
	I0625 16:02:35.673429   41229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 16:02:35.689211   41229 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1528/cgroup
	W0625 16:02:35.700424   41229 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1528/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0625 16:02:35.700466   41229 ssh_runner.go:195] Run: ls
	I0625 16:02:35.705342   41229 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0625 16:02:35.712069   41229 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0625 16:02:35.712097   41229 status.go:422] ha-674765-m03 apiserver status = Running (err=<nil>)
	I0625 16:02:35.712108   41229 status.go:257] ha-674765-m03 status: &{Name:ha-674765-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0625 16:02:35.712127   41229 status.go:255] checking status of ha-674765-m04 ...
	I0625 16:02:35.712412   41229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:35.712446   41229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:35.727011   41229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45411
	I0625 16:02:35.727443   41229 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:35.727868   41229 main.go:141] libmachine: Using API Version  1
	I0625 16:02:35.727901   41229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:35.728177   41229 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:35.728368   41229 main.go:141] libmachine: (ha-674765-m04) Calling .GetState
	I0625 16:02:35.729703   41229 status.go:330] ha-674765-m04 host status = "Running" (err=<nil>)
	I0625 16:02:35.729719   41229 host.go:66] Checking if "ha-674765-m04" exists ...
	I0625 16:02:35.729982   41229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:35.730013   41229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:35.744362   41229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34789
	I0625 16:02:35.744749   41229 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:35.745232   41229 main.go:141] libmachine: Using API Version  1
	I0625 16:02:35.745258   41229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:35.745567   41229 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:35.745725   41229 main.go:141] libmachine: (ha-674765-m04) Calling .GetIP
	I0625 16:02:35.748852   41229 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:35.749313   41229 main.go:141] libmachine: (ha-674765-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:21:a2", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:59:02 +0000 UTC Type:0 Mac:52:54:00:7a:21:a2 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-674765-m04 Clientid:01:52:54:00:7a:21:a2}
	I0625 16:02:35.749349   41229 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined IP address 192.168.39.7 and MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:35.749507   41229 host.go:66] Checking if "ha-674765-m04" exists ...
	I0625 16:02:35.749924   41229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:35.749968   41229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:35.765054   41229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41327
	I0625 16:02:35.765457   41229 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:35.765949   41229 main.go:141] libmachine: Using API Version  1
	I0625 16:02:35.765975   41229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:35.766343   41229 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:35.766551   41229 main.go:141] libmachine: (ha-674765-m04) Calling .DriverName
	I0625 16:02:35.766735   41229 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:35.766758   41229 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHHostname
	I0625 16:02:35.769557   41229 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:35.770000   41229 main.go:141] libmachine: (ha-674765-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:21:a2", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:59:02 +0000 UTC Type:0 Mac:52:54:00:7a:21:a2 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-674765-m04 Clientid:01:52:54:00:7a:21:a2}
	I0625 16:02:35.770018   41229 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined IP address 192.168.39.7 and MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:35.770161   41229 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHPort
	I0625 16:02:35.770325   41229 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHKeyPath
	I0625 16:02:35.770501   41229 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHUsername
	I0625 16:02:35.770633   41229 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m04/id_rsa Username:docker}
	I0625 16:02:35.850725   41229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:02:35.866633   41229 status.go:257] ha-674765-m04 status: &{Name:ha-674765-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
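The m02 failure is purely a connectivity problem: sshutil logs "dial failure (will retry)", backs off for roughly 200ms, and then gives up with "no route to host". A minimal sketch of that dial-and-retry pattern, with a hypothetical probeSSH helper and made-up attempt count and backoff (the full retry policy is not visible in this log):

	package main
	
	import (
		"log"
		"net"
		"time"
	)
	
	// probeSSH is a hypothetical helper mirroring the retry pattern in the log:
	// try the node's SSH port a few times with a short backoff, then give up and
	// let the caller mark the host as Error.
	func probeSSH(addr string, attempts int, backoff time.Duration) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			lastErr = err
			log.Printf("dial failure (will retry): %v", err)
			time.Sleep(backoff)
		}
		return lastErr
	}
	
	func main() {
		// 192.168.39.53:22 is the unreachable ha-674765-m02 address from the log.
		if err := probeSSH("192.168.39.53:22", 3, 200*time.Millisecond); err != nil {
			log.Printf("status error: %v", err) // the node would be reported as Host:Error
		}
	}

Only after the retries are exhausted does status.go record Host:Error for the node, with Kubelet and APIServer reported as Nonexistent.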
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-674765 status -v=7 --alsologtostderr: exit status 3 (4.094554019s)

                                                
                                                
-- stdout --
	ha-674765
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-674765-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-674765-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-674765-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0625 16:02:38.138902   41330 out.go:291] Setting OutFile to fd 1 ...
	I0625 16:02:38.139166   41330 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:02:38.139177   41330 out.go:304] Setting ErrFile to fd 2...
	I0625 16:02:38.139182   41330 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:02:38.139376   41330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 16:02:38.139557   41330 out.go:298] Setting JSON to false
	I0625 16:02:38.139583   41330 mustload.go:65] Loading cluster: ha-674765
	I0625 16:02:38.139690   41330 notify.go:220] Checking for updates...
	I0625 16:02:38.139974   41330 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:02:38.139989   41330 status.go:255] checking status of ha-674765 ...
	I0625 16:02:38.140347   41330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:38.140408   41330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:38.166960   41330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46159
	I0625 16:02:38.167402   41330 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:38.168057   41330 main.go:141] libmachine: Using API Version  1
	I0625 16:02:38.168081   41330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:38.168496   41330 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:38.168681   41330 main.go:141] libmachine: (ha-674765) Calling .GetState
	I0625 16:02:38.170305   41330 status.go:330] ha-674765 host status = "Running" (err=<nil>)
	I0625 16:02:38.170323   41330 host.go:66] Checking if "ha-674765" exists ...
	I0625 16:02:38.170635   41330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:38.170681   41330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:38.186659   41330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44299
	I0625 16:02:38.187076   41330 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:38.187678   41330 main.go:141] libmachine: Using API Version  1
	I0625 16:02:38.187700   41330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:38.188001   41330 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:38.188183   41330 main.go:141] libmachine: (ha-674765) Calling .GetIP
	I0625 16:02:38.190776   41330 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:02:38.191200   41330 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:02:38.191222   41330 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:02:38.191343   41330 host.go:66] Checking if "ha-674765" exists ...
	I0625 16:02:38.191673   41330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:38.191714   41330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:38.206303   41330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38883
	I0625 16:02:38.206687   41330 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:38.207200   41330 main.go:141] libmachine: Using API Version  1
	I0625 16:02:38.207227   41330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:38.207499   41330 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:38.207658   41330 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 16:02:38.207821   41330 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:38.207851   41330 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:02:38.210564   41330 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:02:38.210968   41330 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:02:38.210999   41330 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:02:38.211149   41330 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:02:38.211291   41330 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:02:38.211422   41330 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:02:38.211573   41330 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 16:02:38.294902   41330 ssh_runner.go:195] Run: systemctl --version
	I0625 16:02:38.300838   41330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:02:38.315946   41330 kubeconfig.go:125] found "ha-674765" server: "https://192.168.39.254:8443"
	I0625 16:02:38.315982   41330 api_server.go:166] Checking apiserver status ...
	I0625 16:02:38.316023   41330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 16:02:38.331231   41330 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1176/cgroup
	W0625 16:02:38.340424   41330 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1176/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0625 16:02:38.340466   41330 ssh_runner.go:195] Run: ls
	I0625 16:02:38.344642   41330 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0625 16:02:38.349096   41330 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0625 16:02:38.349114   41330 status.go:422] ha-674765 apiserver status = Running (err=<nil>)
	I0625 16:02:38.349122   41330 status.go:257] ha-674765 status: &{Name:ha-674765 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0625 16:02:38.349137   41330 status.go:255] checking status of ha-674765-m02 ...
	I0625 16:02:38.349440   41330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:38.349477   41330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:38.363840   41330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40227
	I0625 16:02:38.364246   41330 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:38.364734   41330 main.go:141] libmachine: Using API Version  1
	I0625 16:02:38.364758   41330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:38.365057   41330 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:38.365238   41330 main.go:141] libmachine: (ha-674765-m02) Calling .GetState
	I0625 16:02:38.366782   41330 status.go:330] ha-674765-m02 host status = "Running" (err=<nil>)
	I0625 16:02:38.366795   41330 host.go:66] Checking if "ha-674765-m02" exists ...
	I0625 16:02:38.367056   41330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:38.367091   41330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:38.381849   41330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32867
	I0625 16:02:38.382280   41330 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:38.382738   41330 main.go:141] libmachine: Using API Version  1
	I0625 16:02:38.382757   41330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:38.383117   41330 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:38.383301   41330 main.go:141] libmachine: (ha-674765-m02) Calling .GetIP
	I0625 16:02:38.385927   41330 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:02:38.386380   41330 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 16:02:38.386418   41330 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:02:38.386577   41330 host.go:66] Checking if "ha-674765-m02" exists ...
	I0625 16:02:38.386977   41330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:38.387022   41330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:38.401694   41330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43689
	I0625 16:02:38.402175   41330 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:38.402649   41330 main.go:141] libmachine: Using API Version  1
	I0625 16:02:38.402668   41330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:38.402943   41330 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:38.403111   41330 main.go:141] libmachine: (ha-674765-m02) Calling .DriverName
	I0625 16:02:38.403331   41330 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:38.403350   41330 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 16:02:38.406109   41330 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:02:38.406742   41330 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 16:02:38.406771   41330 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:02:38.406876   41330 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 16:02:38.407042   41330 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 16:02:38.407203   41330 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 16:02:38.407348   41330 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/id_rsa Username:docker}
	W0625 16:02:38.578671   41330 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.53:22: connect: no route to host
	I0625 16:02:38.578727   41330 retry.go:31] will retry after 199.067862ms: dial tcp 192.168.39.53:22: connect: no route to host
	W0625 16:02:41.842821   41330 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.53:22: connect: no route to host
	W0625 16:02:41.842900   41330 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.53:22: connect: no route to host
	E0625 16:02:41.842936   41330 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.53:22: connect: no route to host
	I0625 16:02:41.842952   41330 status.go:257] ha-674765-m02 status: &{Name:ha-674765-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0625 16:02:41.842975   41330 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.53:22: connect: no route to host
	I0625 16:02:41.842985   41330 status.go:255] checking status of ha-674765-m03 ...
	I0625 16:02:41.843287   41330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:41.843344   41330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:41.858683   41330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42501
	I0625 16:02:41.859196   41330 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:41.859814   41330 main.go:141] libmachine: Using API Version  1
	I0625 16:02:41.859840   41330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:41.860174   41330 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:41.860352   41330 main.go:141] libmachine: (ha-674765-m03) Calling .GetState
	I0625 16:02:41.862012   41330 status.go:330] ha-674765-m03 host status = "Running" (err=<nil>)
	I0625 16:02:41.862028   41330 host.go:66] Checking if "ha-674765-m03" exists ...
	I0625 16:02:41.862321   41330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:41.862384   41330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:41.877421   41330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39905
	I0625 16:02:41.877781   41330 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:41.878241   41330 main.go:141] libmachine: Using API Version  1
	I0625 16:02:41.878263   41330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:41.878571   41330 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:41.878723   41330 main.go:141] libmachine: (ha-674765-m03) Calling .GetIP
	I0625 16:02:41.881468   41330 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:41.881865   41330 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 16:02:41.881891   41330 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:41.882032   41330 host.go:66] Checking if "ha-674765-m03" exists ...
	I0625 16:02:41.882373   41330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:41.882408   41330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:41.896780   41330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34193
	I0625 16:02:41.897198   41330 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:41.897659   41330 main.go:141] libmachine: Using API Version  1
	I0625 16:02:41.897683   41330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:41.897937   41330 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:41.898126   41330 main.go:141] libmachine: (ha-674765-m03) Calling .DriverName
	I0625 16:02:41.898275   41330 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:41.898295   41330 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 16:02:41.900710   41330 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:41.901099   41330 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 16:02:41.901119   41330 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:41.901210   41330 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 16:02:41.901530   41330 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 16:02:41.901678   41330 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 16:02:41.901777   41330 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/id_rsa Username:docker}
	I0625 16:02:41.986819   41330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:02:42.004354   41330 kubeconfig.go:125] found "ha-674765" server: "https://192.168.39.254:8443"
	I0625 16:02:42.004378   41330 api_server.go:166] Checking apiserver status ...
	I0625 16:02:42.004408   41330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 16:02:42.017857   41330 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1528/cgroup
	W0625 16:02:42.028205   41330 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1528/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0625 16:02:42.028250   41330 ssh_runner.go:195] Run: ls
	I0625 16:02:42.033254   41330 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0625 16:02:42.037783   41330 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0625 16:02:42.037802   41330 status.go:422] ha-674765-m03 apiserver status = Running (err=<nil>)
	I0625 16:02:42.037813   41330 status.go:257] ha-674765-m03 status: &{Name:ha-674765-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0625 16:02:42.037833   41330 status.go:255] checking status of ha-674765-m04 ...
	I0625 16:02:42.038238   41330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:42.038282   41330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:42.052822   41330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42679
	I0625 16:02:42.053235   41330 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:42.053746   41330 main.go:141] libmachine: Using API Version  1
	I0625 16:02:42.053766   41330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:42.054077   41330 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:42.054285   41330 main.go:141] libmachine: (ha-674765-m04) Calling .GetState
	I0625 16:02:42.055700   41330 status.go:330] ha-674765-m04 host status = "Running" (err=<nil>)
	I0625 16:02:42.055714   41330 host.go:66] Checking if "ha-674765-m04" exists ...
	I0625 16:02:42.056022   41330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:42.056074   41330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:42.071899   41330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39835
	I0625 16:02:42.072385   41330 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:42.072952   41330 main.go:141] libmachine: Using API Version  1
	I0625 16:02:42.072979   41330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:42.073296   41330 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:42.073527   41330 main.go:141] libmachine: (ha-674765-m04) Calling .GetIP
	I0625 16:02:42.077215   41330 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:42.077718   41330 main.go:141] libmachine: (ha-674765-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:21:a2", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:59:02 +0000 UTC Type:0 Mac:52:54:00:7a:21:a2 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-674765-m04 Clientid:01:52:54:00:7a:21:a2}
	I0625 16:02:42.078166   41330 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined IP address 192.168.39.7 and MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:42.077931   41330 host.go:66] Checking if "ha-674765-m04" exists ...
	I0625 16:02:42.078501   41330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:42.078540   41330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:42.093242   41330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36213
	I0625 16:02:42.093613   41330 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:42.094057   41330 main.go:141] libmachine: Using API Version  1
	I0625 16:02:42.094084   41330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:42.094413   41330 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:42.094611   41330 main.go:141] libmachine: (ha-674765-m04) Calling .DriverName
	I0625 16:02:42.094819   41330 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:42.094839   41330 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHHostname
	I0625 16:02:42.097454   41330 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:42.097834   41330 main.go:141] libmachine: (ha-674765-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:21:a2", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:59:02 +0000 UTC Type:0 Mac:52:54:00:7a:21:a2 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-674765-m04 Clientid:01:52:54:00:7a:21:a2}
	I0625 16:02:42.097860   41330 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined IP address 192.168.39.7 and MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:42.098026   41330 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHPort
	I0625 16:02:42.098203   41330 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHKeyPath
	I0625 16:02:42.098354   41330 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHUsername
	I0625 16:02:42.098501   41330 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m04/id_rsa Username:docker}
	I0625 16:02:42.178952   41330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:02:42.193249   41330 status.go:257] ha-674765-m04 status: &{Name:ha-674765-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-674765 status -v=7 --alsologtostderr: exit status 3 (3.702057215s)

                                                
                                                
-- stdout --
	ha-674765
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-674765-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-674765-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-674765-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0625 16:02:49.384706   41453 out.go:291] Setting OutFile to fd 1 ...
	I0625 16:02:49.384911   41453 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:02:49.384943   41453 out.go:304] Setting ErrFile to fd 2...
	I0625 16:02:49.384961   41453 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:02:49.385457   41453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 16:02:49.385636   41453 out.go:298] Setting JSON to false
	I0625 16:02:49.385656   41453 mustload.go:65] Loading cluster: ha-674765
	I0625 16:02:49.385693   41453 notify.go:220] Checking for updates...
	I0625 16:02:49.386016   41453 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:02:49.386028   41453 status.go:255] checking status of ha-674765 ...
	I0625 16:02:49.386370   41453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:49.386425   41453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:49.400698   41453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42999
	I0625 16:02:49.401126   41453 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:49.401823   41453 main.go:141] libmachine: Using API Version  1
	I0625 16:02:49.401842   41453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:49.402209   41453 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:49.402411   41453 main.go:141] libmachine: (ha-674765) Calling .GetState
	I0625 16:02:49.403961   41453 status.go:330] ha-674765 host status = "Running" (err=<nil>)
	I0625 16:02:49.403980   41453 host.go:66] Checking if "ha-674765" exists ...
	I0625 16:02:49.404277   41453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:49.404309   41453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:49.420757   41453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32981
	I0625 16:02:49.421194   41453 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:49.421652   41453 main.go:141] libmachine: Using API Version  1
	I0625 16:02:49.421670   41453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:49.421966   41453 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:49.422158   41453 main.go:141] libmachine: (ha-674765) Calling .GetIP
	I0625 16:02:49.424922   41453 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:02:49.425407   41453 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:02:49.425438   41453 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:02:49.425586   41453 host.go:66] Checking if "ha-674765" exists ...
	I0625 16:02:49.425864   41453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:49.425903   41453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:49.440357   41453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38271
	I0625 16:02:49.440661   41453 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:49.441104   41453 main.go:141] libmachine: Using API Version  1
	I0625 16:02:49.441129   41453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:49.441448   41453 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:49.441646   41453 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 16:02:49.441852   41453 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:49.441879   41453 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:02:49.444395   41453 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:02:49.444799   41453 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:02:49.444831   41453 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:02:49.444885   41453 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:02:49.445004   41453 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:02:49.445167   41453 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:02:49.445298   41453 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 16:02:49.526048   41453 ssh_runner.go:195] Run: systemctl --version
	I0625 16:02:49.532658   41453 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:02:49.548198   41453 kubeconfig.go:125] found "ha-674765" server: "https://192.168.39.254:8443"
	I0625 16:02:49.548227   41453 api_server.go:166] Checking apiserver status ...
	I0625 16:02:49.548254   41453 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 16:02:49.562126   41453 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1176/cgroup
	W0625 16:02:49.572362   41453 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1176/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0625 16:02:49.572398   41453 ssh_runner.go:195] Run: ls
	I0625 16:02:49.577344   41453 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0625 16:02:49.581438   41453 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0625 16:02:49.581459   41453 status.go:422] ha-674765 apiserver status = Running (err=<nil>)
	I0625 16:02:49.581467   41453 status.go:257] ha-674765 status: &{Name:ha-674765 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0625 16:02:49.581488   41453 status.go:255] checking status of ha-674765-m02 ...
	I0625 16:02:49.581758   41453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:49.581790   41453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:49.596375   41453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46345
	I0625 16:02:49.596795   41453 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:49.597290   41453 main.go:141] libmachine: Using API Version  1
	I0625 16:02:49.597310   41453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:49.597649   41453 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:49.597813   41453 main.go:141] libmachine: (ha-674765-m02) Calling .GetState
	I0625 16:02:49.599314   41453 status.go:330] ha-674765-m02 host status = "Running" (err=<nil>)
	I0625 16:02:49.599338   41453 host.go:66] Checking if "ha-674765-m02" exists ...
	I0625 16:02:49.599660   41453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:49.599709   41453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:49.614075   41453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36425
	I0625 16:02:49.614519   41453 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:49.615097   41453 main.go:141] libmachine: Using API Version  1
	I0625 16:02:49.615117   41453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:49.615402   41453 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:49.615597   41453 main.go:141] libmachine: (ha-674765-m02) Calling .GetIP
	I0625 16:02:49.618577   41453 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:02:49.619149   41453 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 16:02:49.619178   41453 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:02:49.619306   41453 host.go:66] Checking if "ha-674765-m02" exists ...
	I0625 16:02:49.619622   41453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:49.619654   41453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:49.634045   41453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41921
	I0625 16:02:49.634483   41453 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:49.634985   41453 main.go:141] libmachine: Using API Version  1
	I0625 16:02:49.635006   41453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:49.635261   41453 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:49.635429   41453 main.go:141] libmachine: (ha-674765-m02) Calling .DriverName
	I0625 16:02:49.635594   41453 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:49.635612   41453 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 16:02:49.638330   41453 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:02:49.638745   41453 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 16:02:49.638764   41453 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:02:49.638903   41453 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 16:02:49.639051   41453 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 16:02:49.639212   41453 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 16:02:49.639360   41453 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/id_rsa Username:docker}
	W0625 16:02:52.690677   41453 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.53:22: connect: no route to host
	W0625 16:02:52.690780   41453 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.53:22: connect: no route to host
	E0625 16:02:52.690802   41453 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.53:22: connect: no route to host
	I0625 16:02:52.690813   41453 status.go:257] ha-674765-m02 status: &{Name:ha-674765-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0625 16:02:52.690842   41453 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.53:22: connect: no route to host
	I0625 16:02:52.690856   41453 status.go:255] checking status of ha-674765-m03 ...
	I0625 16:02:52.691174   41453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:52.691235   41453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:52.705588   41453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40683
	I0625 16:02:52.706068   41453 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:52.706568   41453 main.go:141] libmachine: Using API Version  1
	I0625 16:02:52.706590   41453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:52.706908   41453 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:52.707078   41453 main.go:141] libmachine: (ha-674765-m03) Calling .GetState
	I0625 16:02:52.708752   41453 status.go:330] ha-674765-m03 host status = "Running" (err=<nil>)
	I0625 16:02:52.708767   41453 host.go:66] Checking if "ha-674765-m03" exists ...
	I0625 16:02:52.709040   41453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:52.709071   41453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:52.723099   41453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42577
	I0625 16:02:52.723530   41453 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:52.724012   41453 main.go:141] libmachine: Using API Version  1
	I0625 16:02:52.724031   41453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:52.724354   41453 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:52.724529   41453 main.go:141] libmachine: (ha-674765-m03) Calling .GetIP
	I0625 16:02:52.727357   41453 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:52.727792   41453 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 16:02:52.727819   41453 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:52.727921   41453 host.go:66] Checking if "ha-674765-m03" exists ...
	I0625 16:02:52.728236   41453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:52.728275   41453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:52.742011   41453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41263
	I0625 16:02:52.742437   41453 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:52.742902   41453 main.go:141] libmachine: Using API Version  1
	I0625 16:02:52.742923   41453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:52.743206   41453 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:52.743385   41453 main.go:141] libmachine: (ha-674765-m03) Calling .DriverName
	I0625 16:02:52.743543   41453 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:52.743564   41453 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 16:02:52.745917   41453 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:52.746365   41453 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 16:02:52.746421   41453 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:52.746585   41453 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 16:02:52.746747   41453 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 16:02:52.746877   41453 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 16:02:52.747030   41453 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/id_rsa Username:docker}
	I0625 16:02:52.830396   41453 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:02:52.848016   41453 kubeconfig.go:125] found "ha-674765" server: "https://192.168.39.254:8443"
	I0625 16:02:52.848043   41453 api_server.go:166] Checking apiserver status ...
	I0625 16:02:52.848083   41453 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 16:02:52.866555   41453 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1528/cgroup
	W0625 16:02:52.877616   41453 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1528/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0625 16:02:52.877679   41453 ssh_runner.go:195] Run: ls
	I0625 16:02:52.882354   41453 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0625 16:02:52.886736   41453 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0625 16:02:52.886759   41453 status.go:422] ha-674765-m03 apiserver status = Running (err=<nil>)
	I0625 16:02:52.886769   41453 status.go:257] ha-674765-m03 status: &{Name:ha-674765-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0625 16:02:52.886787   41453 status.go:255] checking status of ha-674765-m04 ...
	I0625 16:02:52.887091   41453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:52.887123   41453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:52.901877   41453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36651
	I0625 16:02:52.902313   41453 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:52.902828   41453 main.go:141] libmachine: Using API Version  1
	I0625 16:02:52.902854   41453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:52.903177   41453 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:52.903375   41453 main.go:141] libmachine: (ha-674765-m04) Calling .GetState
	I0625 16:02:52.904822   41453 status.go:330] ha-674765-m04 host status = "Running" (err=<nil>)
	I0625 16:02:52.904837   41453 host.go:66] Checking if "ha-674765-m04" exists ...
	I0625 16:02:52.905120   41453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:52.905158   41453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:52.919948   41453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34615
	I0625 16:02:52.920296   41453 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:52.920737   41453 main.go:141] libmachine: Using API Version  1
	I0625 16:02:52.920760   41453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:52.921055   41453 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:52.921228   41453 main.go:141] libmachine: (ha-674765-m04) Calling .GetIP
	I0625 16:02:52.924165   41453 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:52.924592   41453 main.go:141] libmachine: (ha-674765-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:21:a2", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:59:02 +0000 UTC Type:0 Mac:52:54:00:7a:21:a2 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-674765-m04 Clientid:01:52:54:00:7a:21:a2}
	I0625 16:02:52.924621   41453 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined IP address 192.168.39.7 and MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:52.924815   41453 host.go:66] Checking if "ha-674765-m04" exists ...
	I0625 16:02:52.925120   41453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:52.925153   41453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:52.939875   41453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43467
	I0625 16:02:52.940198   41453 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:52.940613   41453 main.go:141] libmachine: Using API Version  1
	I0625 16:02:52.940638   41453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:52.940945   41453 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:52.941118   41453 main.go:141] libmachine: (ha-674765-m04) Calling .DriverName
	I0625 16:02:52.941301   41453 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:52.941327   41453 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHHostname
	I0625 16:02:52.943982   41453 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:52.944361   41453 main.go:141] libmachine: (ha-674765-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:21:a2", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:59:02 +0000 UTC Type:0 Mac:52:54:00:7a:21:a2 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-674765-m04 Clientid:01:52:54:00:7a:21:a2}
	I0625 16:02:52.944386   41453 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined IP address 192.168.39.7 and MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:52.944515   41453 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHPort
	I0625 16:02:52.944686   41453 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHKeyPath
	I0625 16:02:52.944790   41453 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHUsername
	I0625 16:02:52.944897   41453 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m04/id_rsa Username:docker}
	I0625 16:02:53.026577   41453 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:02:53.040818   41453 status.go:257] ha-674765-m04 status: &{Name:ha-674765-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-674765 status -v=7 --alsologtostderr: exit status 7 (619.452821ms)

                                                
                                                
-- stdout --
	ha-674765
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-674765-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-674765-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-674765-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0625 16:02:57.349137   41589 out.go:291] Setting OutFile to fd 1 ...
	I0625 16:02:57.349349   41589 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:02:57.349357   41589 out.go:304] Setting ErrFile to fd 2...
	I0625 16:02:57.349361   41589 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:02:57.349541   41589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 16:02:57.349686   41589 out.go:298] Setting JSON to false
	I0625 16:02:57.349706   41589 mustload.go:65] Loading cluster: ha-674765
	I0625 16:02:57.349747   41589 notify.go:220] Checking for updates...
	I0625 16:02:57.350042   41589 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:02:57.350055   41589 status.go:255] checking status of ha-674765 ...
	I0625 16:02:57.350461   41589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:57.350553   41589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:57.365504   41589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38713
	I0625 16:02:57.365969   41589 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:57.366654   41589 main.go:141] libmachine: Using API Version  1
	I0625 16:02:57.366700   41589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:57.366983   41589 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:57.367163   41589 main.go:141] libmachine: (ha-674765) Calling .GetState
	I0625 16:02:57.368613   41589 status.go:330] ha-674765 host status = "Running" (err=<nil>)
	I0625 16:02:57.368630   41589 host.go:66] Checking if "ha-674765" exists ...
	I0625 16:02:57.368888   41589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:57.368931   41589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:57.383120   41589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37023
	I0625 16:02:57.383457   41589 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:57.383854   41589 main.go:141] libmachine: Using API Version  1
	I0625 16:02:57.383872   41589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:57.384176   41589 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:57.384362   41589 main.go:141] libmachine: (ha-674765) Calling .GetIP
	I0625 16:02:57.386862   41589 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:02:57.387244   41589 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:02:57.387269   41589 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:02:57.387406   41589 host.go:66] Checking if "ha-674765" exists ...
	I0625 16:02:57.387687   41589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:57.387718   41589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:57.402322   41589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35333
	I0625 16:02:57.402774   41589 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:57.403254   41589 main.go:141] libmachine: Using API Version  1
	I0625 16:02:57.403279   41589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:57.403652   41589 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:57.403886   41589 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 16:02:57.404070   41589 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:57.404116   41589 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:02:57.406635   41589 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:02:57.407035   41589 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:02:57.407066   41589 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:02:57.407166   41589 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:02:57.407332   41589 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:02:57.407476   41589 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:02:57.407657   41589 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 16:02:57.490479   41589 ssh_runner.go:195] Run: systemctl --version
	I0625 16:02:57.497125   41589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:02:57.512259   41589 kubeconfig.go:125] found "ha-674765" server: "https://192.168.39.254:8443"
	I0625 16:02:57.512285   41589 api_server.go:166] Checking apiserver status ...
	I0625 16:02:57.512314   41589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 16:02:57.526494   41589 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1176/cgroup
	W0625 16:02:57.536349   41589 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1176/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0625 16:02:57.536415   41589 ssh_runner.go:195] Run: ls
	I0625 16:02:57.540879   41589 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0625 16:02:57.547526   41589 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0625 16:02:57.547556   41589 status.go:422] ha-674765 apiserver status = Running (err=<nil>)
	I0625 16:02:57.547581   41589 status.go:257] ha-674765 status: &{Name:ha-674765 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0625 16:02:57.547605   41589 status.go:255] checking status of ha-674765-m02 ...
	I0625 16:02:57.547980   41589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:57.548041   41589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:57.562955   41589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36585
	I0625 16:02:57.563334   41589 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:57.563788   41589 main.go:141] libmachine: Using API Version  1
	I0625 16:02:57.563806   41589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:57.564158   41589 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:57.564345   41589 main.go:141] libmachine: (ha-674765-m02) Calling .GetState
	I0625 16:02:57.565873   41589 status.go:330] ha-674765-m02 host status = "Stopped" (err=<nil>)
	I0625 16:02:57.565889   41589 status.go:343] host is not running, skipping remaining checks
	I0625 16:02:57.565897   41589 status.go:257] ha-674765-m02 status: &{Name:ha-674765-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0625 16:02:57.565916   41589 status.go:255] checking status of ha-674765-m03 ...
	I0625 16:02:57.566324   41589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:57.566370   41589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:57.580806   41589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40939
	I0625 16:02:57.581235   41589 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:57.581699   41589 main.go:141] libmachine: Using API Version  1
	I0625 16:02:57.581720   41589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:57.581972   41589 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:57.582150   41589 main.go:141] libmachine: (ha-674765-m03) Calling .GetState
	I0625 16:02:57.583449   41589 status.go:330] ha-674765-m03 host status = "Running" (err=<nil>)
	I0625 16:02:57.583467   41589 host.go:66] Checking if "ha-674765-m03" exists ...
	I0625 16:02:57.583744   41589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:57.583778   41589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:57.597541   41589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33577
	I0625 16:02:57.597870   41589 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:57.598273   41589 main.go:141] libmachine: Using API Version  1
	I0625 16:02:57.598291   41589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:57.598633   41589 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:57.598821   41589 main.go:141] libmachine: (ha-674765-m03) Calling .GetIP
	I0625 16:02:57.601253   41589 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:57.601607   41589 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 16:02:57.601633   41589 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:57.601802   41589 host.go:66] Checking if "ha-674765-m03" exists ...
	I0625 16:02:57.602065   41589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:57.602103   41589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:57.616951   41589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42413
	I0625 16:02:57.617276   41589 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:57.617684   41589 main.go:141] libmachine: Using API Version  1
	I0625 16:02:57.617702   41589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:57.618009   41589 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:57.618170   41589 main.go:141] libmachine: (ha-674765-m03) Calling .DriverName
	I0625 16:02:57.618336   41589 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:57.618360   41589 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 16:02:57.620899   41589 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:57.621292   41589 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 16:02:57.621318   41589 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:02:57.621427   41589 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 16:02:57.621577   41589 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 16:02:57.621732   41589 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 16:02:57.621863   41589 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/id_rsa Username:docker}
	I0625 16:02:57.707408   41589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:02:57.725271   41589 kubeconfig.go:125] found "ha-674765" server: "https://192.168.39.254:8443"
	I0625 16:02:57.725303   41589 api_server.go:166] Checking apiserver status ...
	I0625 16:02:57.725349   41589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 16:02:57.742158   41589 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1528/cgroup
	W0625 16:02:57.759435   41589 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1528/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0625 16:02:57.759497   41589 ssh_runner.go:195] Run: ls
	I0625 16:02:57.767355   41589 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0625 16:02:57.771556   41589 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0625 16:02:57.771575   41589 status.go:422] ha-674765-m03 apiserver status = Running (err=<nil>)
	I0625 16:02:57.771583   41589 status.go:257] ha-674765-m03 status: &{Name:ha-674765-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0625 16:02:57.771595   41589 status.go:255] checking status of ha-674765-m04 ...
	I0625 16:02:57.771870   41589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:57.771902   41589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:57.786688   41589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43117
	I0625 16:02:57.787141   41589 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:57.787672   41589 main.go:141] libmachine: Using API Version  1
	I0625 16:02:57.787696   41589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:57.788043   41589 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:57.788240   41589 main.go:141] libmachine: (ha-674765-m04) Calling .GetState
	I0625 16:02:57.789817   41589 status.go:330] ha-674765-m04 host status = "Running" (err=<nil>)
	I0625 16:02:57.789833   41589 host.go:66] Checking if "ha-674765-m04" exists ...
	I0625 16:02:57.790097   41589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:57.790133   41589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:57.804682   41589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38503
	I0625 16:02:57.805055   41589 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:57.805480   41589 main.go:141] libmachine: Using API Version  1
	I0625 16:02:57.805507   41589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:57.805799   41589 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:57.805985   41589 main.go:141] libmachine: (ha-674765-m04) Calling .GetIP
	I0625 16:02:57.808777   41589 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:57.809173   41589 main.go:141] libmachine: (ha-674765-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:21:a2", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:59:02 +0000 UTC Type:0 Mac:52:54:00:7a:21:a2 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-674765-m04 Clientid:01:52:54:00:7a:21:a2}
	I0625 16:02:57.809197   41589 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined IP address 192.168.39.7 and MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:57.809337   41589 host.go:66] Checking if "ha-674765-m04" exists ...
	I0625 16:02:57.809627   41589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:02:57.809677   41589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:02:57.824280   41589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34017
	I0625 16:02:57.824649   41589 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:02:57.825095   41589 main.go:141] libmachine: Using API Version  1
	I0625 16:02:57.825121   41589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:02:57.825422   41589 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:02:57.825601   41589 main.go:141] libmachine: (ha-674765-m04) Calling .DriverName
	I0625 16:02:57.825750   41589 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:02:57.825766   41589 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHHostname
	I0625 16:02:57.828781   41589 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:57.829170   41589 main.go:141] libmachine: (ha-674765-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:21:a2", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:59:02 +0000 UTC Type:0 Mac:52:54:00:7a:21:a2 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-674765-m04 Clientid:01:52:54:00:7a:21:a2}
	I0625 16:02:57.829197   41589 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined IP address 192.168.39.7 and MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:02:57.829380   41589 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHPort
	I0625 16:02:57.829548   41589 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHKeyPath
	I0625 16:02:57.829716   41589 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHUsername
	I0625 16:02:57.829852   41589 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m04/id_rsa Username:docker}
	I0625 16:02:57.909889   41589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:02:57.924541   41589 status.go:257] ha-674765-m04 status: &{Name:ha-674765-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-674765 status -v=7 --alsologtostderr: exit status 7 (612.977081ms)

                                                
                                                
-- stdout --
	ha-674765
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-674765-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-674765-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-674765-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0625 16:03:04.682574   41677 out.go:291] Setting OutFile to fd 1 ...
	I0625 16:03:04.682825   41677 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:03:04.682834   41677 out.go:304] Setting ErrFile to fd 2...
	I0625 16:03:04.682837   41677 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:03:04.683007   41677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 16:03:04.683214   41677 out.go:298] Setting JSON to false
	I0625 16:03:04.683237   41677 mustload.go:65] Loading cluster: ha-674765
	I0625 16:03:04.683287   41677 notify.go:220] Checking for updates...
	I0625 16:03:04.683601   41677 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:03:04.683619   41677 status.go:255] checking status of ha-674765 ...
	I0625 16:03:04.684071   41677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:03:04.684117   41677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:03:04.703503   41677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40761
	I0625 16:03:04.703898   41677 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:03:04.704456   41677 main.go:141] libmachine: Using API Version  1
	I0625 16:03:04.704478   41677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:03:04.704851   41677 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:03:04.705025   41677 main.go:141] libmachine: (ha-674765) Calling .GetState
	I0625 16:03:04.706489   41677 status.go:330] ha-674765 host status = "Running" (err=<nil>)
	I0625 16:03:04.706507   41677 host.go:66] Checking if "ha-674765" exists ...
	I0625 16:03:04.706785   41677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:03:04.706815   41677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:03:04.720480   41677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35213
	I0625 16:03:04.720789   41677 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:03:04.721206   41677 main.go:141] libmachine: Using API Version  1
	I0625 16:03:04.721231   41677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:03:04.721516   41677 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:03:04.721693   41677 main.go:141] libmachine: (ha-674765) Calling .GetIP
	I0625 16:03:04.724392   41677 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:03:04.724774   41677 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:03:04.724805   41677 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:03:04.724887   41677 host.go:66] Checking if "ha-674765" exists ...
	I0625 16:03:04.725203   41677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:03:04.725239   41677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:03:04.739446   41677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41935
	I0625 16:03:04.739830   41677 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:03:04.740219   41677 main.go:141] libmachine: Using API Version  1
	I0625 16:03:04.740237   41677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:03:04.740505   41677 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:03:04.740715   41677 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 16:03:04.740879   41677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:03:04.740895   41677 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:03:04.743673   41677 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:03:04.744105   41677 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:03:04.744150   41677 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:03:04.744276   41677 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:03:04.744436   41677 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:03:04.744590   41677 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:03:04.744701   41677 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 16:03:04.831006   41677 ssh_runner.go:195] Run: systemctl --version
	I0625 16:03:04.837255   41677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:03:04.854135   41677 kubeconfig.go:125] found "ha-674765" server: "https://192.168.39.254:8443"
	I0625 16:03:04.854158   41677 api_server.go:166] Checking apiserver status ...
	I0625 16:03:04.854189   41677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 16:03:04.870914   41677 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1176/cgroup
	W0625 16:03:04.882360   41677 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1176/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0625 16:03:04.882394   41677 ssh_runner.go:195] Run: ls
	I0625 16:03:04.887092   41677 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0625 16:03:04.891208   41677 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0625 16:03:04.891228   41677 status.go:422] ha-674765 apiserver status = Running (err=<nil>)
	I0625 16:03:04.891241   41677 status.go:257] ha-674765 status: &{Name:ha-674765 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0625 16:03:04.891265   41677 status.go:255] checking status of ha-674765-m02 ...
	I0625 16:03:04.891550   41677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:03:04.891590   41677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:03:04.906033   41677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44545
	I0625 16:03:04.906426   41677 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:03:04.906937   41677 main.go:141] libmachine: Using API Version  1
	I0625 16:03:04.906957   41677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:03:04.907452   41677 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:03:04.907647   41677 main.go:141] libmachine: (ha-674765-m02) Calling .GetState
	I0625 16:03:04.909222   41677 status.go:330] ha-674765-m02 host status = "Stopped" (err=<nil>)
	I0625 16:03:04.909235   41677 status.go:343] host is not running, skipping remaining checks
	I0625 16:03:04.909242   41677 status.go:257] ha-674765-m02 status: &{Name:ha-674765-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0625 16:03:04.909258   41677 status.go:255] checking status of ha-674765-m03 ...
	I0625 16:03:04.909533   41677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:03:04.909572   41677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:03:04.923633   41677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34427
	I0625 16:03:04.923976   41677 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:03:04.924366   41677 main.go:141] libmachine: Using API Version  1
	I0625 16:03:04.924388   41677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:03:04.924631   41677 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:03:04.924824   41677 main.go:141] libmachine: (ha-674765-m03) Calling .GetState
	I0625 16:03:04.926216   41677 status.go:330] ha-674765-m03 host status = "Running" (err=<nil>)
	I0625 16:03:04.926233   41677 host.go:66] Checking if "ha-674765-m03" exists ...
	I0625 16:03:04.926532   41677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:03:04.926566   41677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:03:04.940662   41677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34885
	I0625 16:03:04.941005   41677 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:03:04.941447   41677 main.go:141] libmachine: Using API Version  1
	I0625 16:03:04.941481   41677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:03:04.941765   41677 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:03:04.941971   41677 main.go:141] libmachine: (ha-674765-m03) Calling .GetIP
	I0625 16:03:04.944570   41677 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:03:04.945002   41677 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 16:03:04.945033   41677 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:03:04.945163   41677 host.go:66] Checking if "ha-674765-m03" exists ...
	I0625 16:03:04.945432   41677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:03:04.945461   41677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:03:04.959407   41677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36679
	I0625 16:03:04.959792   41677 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:03:04.960304   41677 main.go:141] libmachine: Using API Version  1
	I0625 16:03:04.960324   41677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:03:04.960708   41677 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:03:04.960885   41677 main.go:141] libmachine: (ha-674765-m03) Calling .DriverName
	I0625 16:03:04.961093   41677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:03:04.961119   41677 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 16:03:04.963812   41677 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:03:04.964260   41677 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 16:03:04.964283   41677 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:03:04.964407   41677 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 16:03:04.964559   41677 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 16:03:04.964723   41677 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 16:03:04.964857   41677 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/id_rsa Username:docker}
	I0625 16:03:05.050782   41677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:03:05.065447   41677 kubeconfig.go:125] found "ha-674765" server: "https://192.168.39.254:8443"
	I0625 16:03:05.065470   41677 api_server.go:166] Checking apiserver status ...
	I0625 16:03:05.065504   41677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 16:03:05.080046   41677 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1528/cgroup
	W0625 16:03:05.090246   41677 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1528/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0625 16:03:05.090299   41677 ssh_runner.go:195] Run: ls
	I0625 16:03:05.094687   41677 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0625 16:03:05.100121   41677 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0625 16:03:05.100138   41677 status.go:422] ha-674765-m03 apiserver status = Running (err=<nil>)
	I0625 16:03:05.100146   41677 status.go:257] ha-674765-m03 status: &{Name:ha-674765-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0625 16:03:05.100163   41677 status.go:255] checking status of ha-674765-m04 ...
	I0625 16:03:05.100468   41677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:03:05.100499   41677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:03:05.114781   41677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39185
	I0625 16:03:05.115173   41677 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:03:05.115667   41677 main.go:141] libmachine: Using API Version  1
	I0625 16:03:05.115688   41677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:03:05.115987   41677 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:03:05.116177   41677 main.go:141] libmachine: (ha-674765-m04) Calling .GetState
	I0625 16:03:05.117702   41677 status.go:330] ha-674765-m04 host status = "Running" (err=<nil>)
	I0625 16:03:05.117715   41677 host.go:66] Checking if "ha-674765-m04" exists ...
	I0625 16:03:05.118025   41677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:03:05.118076   41677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:03:05.132901   41677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38601
	I0625 16:03:05.133216   41677 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:03:05.133618   41677 main.go:141] libmachine: Using API Version  1
	I0625 16:03:05.133641   41677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:03:05.133917   41677 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:03:05.134094   41677 main.go:141] libmachine: (ha-674765-m04) Calling .GetIP
	I0625 16:03:05.137005   41677 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:03:05.137383   41677 main.go:141] libmachine: (ha-674765-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:21:a2", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:59:02 +0000 UTC Type:0 Mac:52:54:00:7a:21:a2 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-674765-m04 Clientid:01:52:54:00:7a:21:a2}
	I0625 16:03:05.137400   41677 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined IP address 192.168.39.7 and MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:03:05.137542   41677 host.go:66] Checking if "ha-674765-m04" exists ...
	I0625 16:03:05.137817   41677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:03:05.137859   41677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:03:05.152332   41677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40323
	I0625 16:03:05.152700   41677 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:03:05.153117   41677 main.go:141] libmachine: Using API Version  1
	I0625 16:03:05.153133   41677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:03:05.153421   41677 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:03:05.153577   41677 main.go:141] libmachine: (ha-674765-m04) Calling .DriverName
	I0625 16:03:05.153771   41677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:03:05.153791   41677 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHHostname
	I0625 16:03:05.156552   41677 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:03:05.156938   41677 main.go:141] libmachine: (ha-674765-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:21:a2", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:59:02 +0000 UTC Type:0 Mac:52:54:00:7a:21:a2 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-674765-m04 Clientid:01:52:54:00:7a:21:a2}
	I0625 16:03:05.156969   41677 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined IP address 192.168.39.7 and MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:03:05.157081   41677 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHPort
	I0625 16:03:05.157239   41677 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHKeyPath
	I0625 16:03:05.157409   41677 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHUsername
	I0625 16:03:05.157545   41677 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m04/id_rsa Username:docker}
	I0625 16:03:05.238459   41677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:03:05.254956   41677 status.go:257] ha-674765-m04 status: &{Name:ha-674765-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-674765 status -v=7 --alsologtostderr" : exit status 7
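The status probe recorded in the stderr above ends with an HTTPS GET against the control-plane VIP (https://192.168.39.254:8443/healthz) and treats a 200 "ok" response as a running apiserver. Below is a minimal standalone sketch of that style of check, written in Go against the same endpoint seen in the log; certificate verification is skipped purely for illustration, and this does not reproduce minikube's actual client configuration:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above; substitute your own cluster's VIP.
	url := "https://192.168.39.254:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cluster-local certificate here, so verification
		// is disabled in this sketch only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok", matching the log output.
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
}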
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-674765 -n ha-674765
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-674765 logs -n 25: (1.352889834s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-674765 cp ha-674765-m03:/home/docker/cp-test.txt                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765:/home/docker/cp-test_ha-674765-m03_ha-674765.txt                       |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n ha-674765 sudo cat                                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | /home/docker/cp-test_ha-674765-m03_ha-674765.txt                                 |           |         |         |                     |                     |
	| cp      | ha-674765 cp ha-674765-m03:/home/docker/cp-test.txt                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m02:/home/docker/cp-test_ha-674765-m03_ha-674765-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n ha-674765-m02 sudo cat                                          | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | /home/docker/cp-test_ha-674765-m03_ha-674765-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-674765 cp ha-674765-m03:/home/docker/cp-test.txt                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m04:/home/docker/cp-test_ha-674765-m03_ha-674765-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n ha-674765-m04 sudo cat                                          | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | /home/docker/cp-test_ha-674765-m03_ha-674765-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-674765 cp testdata/cp-test.txt                                                | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-674765 cp ha-674765-m04:/home/docker/cp-test.txt                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2213486447/001/cp-test_ha-674765-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-674765 cp ha-674765-m04:/home/docker/cp-test.txt                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765:/home/docker/cp-test_ha-674765-m04_ha-674765.txt                       |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n ha-674765 sudo cat                                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | /home/docker/cp-test_ha-674765-m04_ha-674765.txt                                 |           |         |         |                     |                     |
	| cp      | ha-674765 cp ha-674765-m04:/home/docker/cp-test.txt                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m02:/home/docker/cp-test_ha-674765-m04_ha-674765-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n ha-674765-m02 sudo cat                                          | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | /home/docker/cp-test_ha-674765-m04_ha-674765-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-674765 cp ha-674765-m04:/home/docker/cp-test.txt                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m03:/home/docker/cp-test_ha-674765-m04_ha-674765-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n ha-674765-m03 sudo cat                                          | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | /home/docker/cp-test_ha-674765-m04_ha-674765-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-674765 node stop m02 -v=7                                                     | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-674765 node start m02 -v=7                                                    | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 16:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/25 15:55:24
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0625 15:55:24.665579   36162 out.go:291] Setting OutFile to fd 1 ...
	I0625 15:55:24.665814   36162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 15:55:24.665822   36162 out.go:304] Setting ErrFile to fd 2...
	I0625 15:55:24.665826   36162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 15:55:24.665992   36162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 15:55:24.666568   36162 out.go:298] Setting JSON to false
	I0625 15:55:24.667432   36162 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5869,"bootTime":1719325056,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0625 15:55:24.667481   36162 start.go:139] virtualization: kvm guest
	I0625 15:55:24.669441   36162 out.go:177] * [ha-674765] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0625 15:55:24.671072   36162 out.go:177]   - MINIKUBE_LOCATION=19128
	I0625 15:55:24.671130   36162 notify.go:220] Checking for updates...
	I0625 15:55:24.673413   36162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0625 15:55:24.674621   36162 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19128-13846/kubeconfig
	I0625 15:55:24.675912   36162 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 15:55:24.677153   36162 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0625 15:55:24.678419   36162 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0625 15:55:24.679894   36162 driver.go:392] Setting default libvirt URI to qemu:///system
	I0625 15:55:24.712722   36162 out.go:177] * Using the kvm2 driver based on user configuration
	I0625 15:55:24.714064   36162 start.go:297] selected driver: kvm2
	I0625 15:55:24.714080   36162 start.go:901] validating driver "kvm2" against <nil>
	I0625 15:55:24.714097   36162 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0625 15:55:24.714793   36162 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0625 15:55:24.714863   36162 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19128-13846/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0625 15:55:24.728271   36162 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0625 15:55:24.728309   36162 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0625 15:55:24.728479   36162 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0625 15:55:24.728536   36162 cni.go:84] Creating CNI manager for ""
	I0625 15:55:24.728549   36162 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0625 15:55:24.728554   36162 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0625 15:55:24.728604   36162 start.go:340] cluster config:
	{Name:ha-674765 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-674765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 15:55:24.728681   36162 iso.go:125] acquiring lock: {Name:mk76df652d5e768afc73443035d5ecb8b75ed16c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0625 15:55:24.730321   36162 out.go:177] * Starting "ha-674765" primary control-plane node in "ha-674765" cluster
	I0625 15:55:24.731585   36162 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0625 15:55:24.731613   36162 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0625 15:55:24.731623   36162 cache.go:56] Caching tarball of preloaded images
	I0625 15:55:24.731701   36162 preload.go:173] Found /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0625 15:55:24.731711   36162 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0625 15:55:24.732023   36162 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/config.json ...
	I0625 15:55:24.732062   36162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/config.json: {Name:mke8b11320ef2be457ca4f9c0954f95e94f8e488 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:55:24.732217   36162 start.go:360] acquireMachinesLock for ha-674765: {Name:mk2a1ebee912b37a2b68bf2f76641f82f8fc2fcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0625 15:55:24.732244   36162 start.go:364] duration metric: took 14.976µs to acquireMachinesLock for "ha-674765"
	I0625 15:55:24.732259   36162 start.go:93] Provisioning new machine with config: &{Name:ha-674765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-674765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0625 15:55:24.732320   36162 start.go:125] createHost starting for "" (driver="kvm2")
	I0625 15:55:24.734603   36162 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0625 15:55:24.734725   36162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:55:24.734760   36162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:55:24.747979   36162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44221
	I0625 15:55:24.748409   36162 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:55:24.748974   36162 main.go:141] libmachine: Using API Version  1
	I0625 15:55:24.748995   36162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:55:24.749268   36162 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:55:24.749434   36162 main.go:141] libmachine: (ha-674765) Calling .GetMachineName
	I0625 15:55:24.749539   36162 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 15:55:24.749674   36162 start.go:159] libmachine.API.Create for "ha-674765" (driver="kvm2")
	I0625 15:55:24.749698   36162 client.go:168] LocalClient.Create starting
	I0625 15:55:24.749736   36162 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem
	I0625 15:55:24.749770   36162 main.go:141] libmachine: Decoding PEM data...
	I0625 15:55:24.749788   36162 main.go:141] libmachine: Parsing certificate...
	I0625 15:55:24.749857   36162 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem
	I0625 15:55:24.749880   36162 main.go:141] libmachine: Decoding PEM data...
	I0625 15:55:24.749897   36162 main.go:141] libmachine: Parsing certificate...
	I0625 15:55:24.749931   36162 main.go:141] libmachine: Running pre-create checks...
	I0625 15:55:24.749943   36162 main.go:141] libmachine: (ha-674765) Calling .PreCreateCheck
	I0625 15:55:24.750218   36162 main.go:141] libmachine: (ha-674765) Calling .GetConfigRaw
	I0625 15:55:24.750575   36162 main.go:141] libmachine: Creating machine...
	I0625 15:55:24.750588   36162 main.go:141] libmachine: (ha-674765) Calling .Create
	I0625 15:55:24.750681   36162 main.go:141] libmachine: (ha-674765) Creating KVM machine...
	I0625 15:55:24.751783   36162 main.go:141] libmachine: (ha-674765) DBG | found existing default KVM network
	I0625 15:55:24.752430   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:24.752298   36185 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002091e0}
	I0625 15:55:24.752448   36162 main.go:141] libmachine: (ha-674765) DBG | created network xml: 
	I0625 15:55:24.752460   36162 main.go:141] libmachine: (ha-674765) DBG | <network>
	I0625 15:55:24.752472   36162 main.go:141] libmachine: (ha-674765) DBG |   <name>mk-ha-674765</name>
	I0625 15:55:24.752485   36162 main.go:141] libmachine: (ha-674765) DBG |   <dns enable='no'/>
	I0625 15:55:24.752495   36162 main.go:141] libmachine: (ha-674765) DBG |   
	I0625 15:55:24.752507   36162 main.go:141] libmachine: (ha-674765) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0625 15:55:24.752517   36162 main.go:141] libmachine: (ha-674765) DBG |     <dhcp>
	I0625 15:55:24.752542   36162 main.go:141] libmachine: (ha-674765) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0625 15:55:24.752566   36162 main.go:141] libmachine: (ha-674765) DBG |     </dhcp>
	I0625 15:55:24.752574   36162 main.go:141] libmachine: (ha-674765) DBG |   </ip>
	I0625 15:55:24.752581   36162 main.go:141] libmachine: (ha-674765) DBG |   
	I0625 15:55:24.752586   36162 main.go:141] libmachine: (ha-674765) DBG | </network>
	I0625 15:55:24.752593   36162 main.go:141] libmachine: (ha-674765) DBG | 
	I0625 15:55:24.757461   36162 main.go:141] libmachine: (ha-674765) DBG | trying to create private KVM network mk-ha-674765 192.168.39.0/24...
	I0625 15:55:24.820245   36162 main.go:141] libmachine: (ha-674765) DBG | private KVM network mk-ha-674765 192.168.39.0/24 created
	I0625 15:55:24.820274   36162 main.go:141] libmachine: (ha-674765) Setting up store path in /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765 ...
	I0625 15:55:24.820294   36162 main.go:141] libmachine: (ha-674765) Building disk image from file:///home/jenkins/minikube-integration/19128-13846/.minikube/cache/iso/amd64/minikube-v1.33.1-1719245461-19128-amd64.iso
	I0625 15:55:24.820314   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:24.820252   36185 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 15:55:24.820390   36162 main.go:141] libmachine: (ha-674765) Downloading /home/jenkins/minikube-integration/19128-13846/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19128-13846/.minikube/cache/iso/amd64/minikube-v1.33.1-1719245461-19128-amd64.iso...
	I0625 15:55:25.050812   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:25.050696   36185 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa...
	I0625 15:55:25.288789   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:25.288649   36185 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/ha-674765.rawdisk...
	I0625 15:55:25.288820   36162 main.go:141] libmachine: (ha-674765) DBG | Writing magic tar header
	I0625 15:55:25.288833   36162 main.go:141] libmachine: (ha-674765) DBG | Writing SSH key tar header
	I0625 15:55:25.288847   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:25.288760   36185 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765 ...
	I0625 15:55:25.288868   36162 main.go:141] libmachine: (ha-674765) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765
	I0625 15:55:25.288876   36162 main.go:141] libmachine: (ha-674765) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube/machines
	I0625 15:55:25.288884   36162 main.go:141] libmachine: (ha-674765) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765 (perms=drwx------)
	I0625 15:55:25.288894   36162 main.go:141] libmachine: (ha-674765) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube/machines (perms=drwxr-xr-x)
	I0625 15:55:25.288900   36162 main.go:141] libmachine: (ha-674765) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube (perms=drwxr-xr-x)
	I0625 15:55:25.288906   36162 main.go:141] libmachine: (ha-674765) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846 (perms=drwxrwxr-x)
	I0625 15:55:25.288911   36162 main.go:141] libmachine: (ha-674765) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0625 15:55:25.288920   36162 main.go:141] libmachine: (ha-674765) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0625 15:55:25.288927   36162 main.go:141] libmachine: (ha-674765) Creating domain...
	I0625 15:55:25.288939   36162 main.go:141] libmachine: (ha-674765) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 15:55:25.288955   36162 main.go:141] libmachine: (ha-674765) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846
	I0625 15:55:25.288965   36162 main.go:141] libmachine: (ha-674765) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0625 15:55:25.288972   36162 main.go:141] libmachine: (ha-674765) DBG | Checking permissions on dir: /home/jenkins
	I0625 15:55:25.288980   36162 main.go:141] libmachine: (ha-674765) DBG | Checking permissions on dir: /home
	I0625 15:55:25.288987   36162 main.go:141] libmachine: (ha-674765) DBG | Skipping /home - not owner
	I0625 15:55:25.289917   36162 main.go:141] libmachine: (ha-674765) define libvirt domain using xml: 
	I0625 15:55:25.289940   36162 main.go:141] libmachine: (ha-674765) <domain type='kvm'>
	I0625 15:55:25.289962   36162 main.go:141] libmachine: (ha-674765)   <name>ha-674765</name>
	I0625 15:55:25.289976   36162 main.go:141] libmachine: (ha-674765)   <memory unit='MiB'>2200</memory>
	I0625 15:55:25.290008   36162 main.go:141] libmachine: (ha-674765)   <vcpu>2</vcpu>
	I0625 15:55:25.290030   36162 main.go:141] libmachine: (ha-674765)   <features>
	I0625 15:55:25.290045   36162 main.go:141] libmachine: (ha-674765)     <acpi/>
	I0625 15:55:25.290061   36162 main.go:141] libmachine: (ha-674765)     <apic/>
	I0625 15:55:25.290070   36162 main.go:141] libmachine: (ha-674765)     <pae/>
	I0625 15:55:25.290081   36162 main.go:141] libmachine: (ha-674765)     
	I0625 15:55:25.290091   36162 main.go:141] libmachine: (ha-674765)   </features>
	I0625 15:55:25.290102   36162 main.go:141] libmachine: (ha-674765)   <cpu mode='host-passthrough'>
	I0625 15:55:25.290112   36162 main.go:141] libmachine: (ha-674765)   
	I0625 15:55:25.290123   36162 main.go:141] libmachine: (ha-674765)   </cpu>
	I0625 15:55:25.290138   36162 main.go:141] libmachine: (ha-674765)   <os>
	I0625 15:55:25.290151   36162 main.go:141] libmachine: (ha-674765)     <type>hvm</type>
	I0625 15:55:25.290163   36162 main.go:141] libmachine: (ha-674765)     <boot dev='cdrom'/>
	I0625 15:55:25.290173   36162 main.go:141] libmachine: (ha-674765)     <boot dev='hd'/>
	I0625 15:55:25.290186   36162 main.go:141] libmachine: (ha-674765)     <bootmenu enable='no'/>
	I0625 15:55:25.290193   36162 main.go:141] libmachine: (ha-674765)   </os>
	I0625 15:55:25.290199   36162 main.go:141] libmachine: (ha-674765)   <devices>
	I0625 15:55:25.290206   36162 main.go:141] libmachine: (ha-674765)     <disk type='file' device='cdrom'>
	I0625 15:55:25.290214   36162 main.go:141] libmachine: (ha-674765)       <source file='/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/boot2docker.iso'/>
	I0625 15:55:25.290222   36162 main.go:141] libmachine: (ha-674765)       <target dev='hdc' bus='scsi'/>
	I0625 15:55:25.290227   36162 main.go:141] libmachine: (ha-674765)       <readonly/>
	I0625 15:55:25.290245   36162 main.go:141] libmachine: (ha-674765)     </disk>
	I0625 15:55:25.290253   36162 main.go:141] libmachine: (ha-674765)     <disk type='file' device='disk'>
	I0625 15:55:25.290259   36162 main.go:141] libmachine: (ha-674765)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0625 15:55:25.290268   36162 main.go:141] libmachine: (ha-674765)       <source file='/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/ha-674765.rawdisk'/>
	I0625 15:55:25.290273   36162 main.go:141] libmachine: (ha-674765)       <target dev='hda' bus='virtio'/>
	I0625 15:55:25.290281   36162 main.go:141] libmachine: (ha-674765)     </disk>
	I0625 15:55:25.290285   36162 main.go:141] libmachine: (ha-674765)     <interface type='network'>
	I0625 15:55:25.290295   36162 main.go:141] libmachine: (ha-674765)       <source network='mk-ha-674765'/>
	I0625 15:55:25.290304   36162 main.go:141] libmachine: (ha-674765)       <model type='virtio'/>
	I0625 15:55:25.290312   36162 main.go:141] libmachine: (ha-674765)     </interface>
	I0625 15:55:25.290322   36162 main.go:141] libmachine: (ha-674765)     <interface type='network'>
	I0625 15:55:25.290328   36162 main.go:141] libmachine: (ha-674765)       <source network='default'/>
	I0625 15:55:25.290335   36162 main.go:141] libmachine: (ha-674765)       <model type='virtio'/>
	I0625 15:55:25.290340   36162 main.go:141] libmachine: (ha-674765)     </interface>
	I0625 15:55:25.290346   36162 main.go:141] libmachine: (ha-674765)     <serial type='pty'>
	I0625 15:55:25.290351   36162 main.go:141] libmachine: (ha-674765)       <target port='0'/>
	I0625 15:55:25.290357   36162 main.go:141] libmachine: (ha-674765)     </serial>
	I0625 15:55:25.290362   36162 main.go:141] libmachine: (ha-674765)     <console type='pty'>
	I0625 15:55:25.290367   36162 main.go:141] libmachine: (ha-674765)       <target type='serial' port='0'/>
	I0625 15:55:25.290374   36162 main.go:141] libmachine: (ha-674765)     </console>
	I0625 15:55:25.290379   36162 main.go:141] libmachine: (ha-674765)     <rng model='virtio'>
	I0625 15:55:25.290387   36162 main.go:141] libmachine: (ha-674765)       <backend model='random'>/dev/random</backend>
	I0625 15:55:25.290390   36162 main.go:141] libmachine: (ha-674765)     </rng>
	I0625 15:55:25.290397   36162 main.go:141] libmachine: (ha-674765)     
	I0625 15:55:25.290402   36162 main.go:141] libmachine: (ha-674765)     
	I0625 15:55:25.290422   36162 main.go:141] libmachine: (ha-674765)   </devices>
	I0625 15:55:25.290441   36162 main.go:141] libmachine: (ha-674765) </domain>
	I0625 15:55:25.290492   36162 main.go:141] libmachine: (ha-674765) 
	I0625 15:55:25.294419   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:e3:7e:66 in network default
	I0625 15:55:25.294939   36162 main.go:141] libmachine: (ha-674765) Ensuring networks are active...
	I0625 15:55:25.294974   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:25.295556   36162 main.go:141] libmachine: (ha-674765) Ensuring network default is active
	I0625 15:55:25.295817   36162 main.go:141] libmachine: (ha-674765) Ensuring network mk-ha-674765 is active
	I0625 15:55:25.296305   36162 main.go:141] libmachine: (ha-674765) Getting domain xml...
	I0625 15:55:25.296924   36162 main.go:141] libmachine: (ha-674765) Creating domain...
	I0625 15:55:26.449225   36162 main.go:141] libmachine: (ha-674765) Waiting to get IP...
	I0625 15:55:26.450173   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:26.450538   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find current IP address of domain ha-674765 in network mk-ha-674765
	I0625 15:55:26.450575   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:26.450528   36185 retry.go:31] will retry after 222.087964ms: waiting for machine to come up
	I0625 15:55:26.673822   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:26.674220   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find current IP address of domain ha-674765 in network mk-ha-674765
	I0625 15:55:26.674256   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:26.674178   36185 retry.go:31] will retry after 287.859085ms: waiting for machine to come up
	I0625 15:55:26.963685   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:26.964090   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find current IP address of domain ha-674765 in network mk-ha-674765
	I0625 15:55:26.964118   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:26.964047   36185 retry.go:31] will retry after 424.000535ms: waiting for machine to come up
	I0625 15:55:27.389554   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:27.389984   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find current IP address of domain ha-674765 in network mk-ha-674765
	I0625 15:55:27.390007   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:27.389946   36185 retry.go:31] will retry after 387.926466ms: waiting for machine to come up
	I0625 15:55:27.779437   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:27.779809   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find current IP address of domain ha-674765 in network mk-ha-674765
	I0625 15:55:27.779829   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:27.779786   36185 retry.go:31] will retry after 561.030334ms: waiting for machine to come up
	I0625 15:55:28.342538   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:28.342974   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find current IP address of domain ha-674765 in network mk-ha-674765
	I0625 15:55:28.342999   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:28.342938   36185 retry.go:31] will retry after 584.411363ms: waiting for machine to come up
	I0625 15:55:28.928603   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:28.928954   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find current IP address of domain ha-674765 in network mk-ha-674765
	I0625 15:55:28.928978   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:28.928906   36185 retry.go:31] will retry after 1.187786363s: waiting for machine to come up
	I0625 15:55:30.118698   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:30.119085   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find current IP address of domain ha-674765 in network mk-ha-674765
	I0625 15:55:30.119113   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:30.119029   36185 retry.go:31] will retry after 1.349507736s: waiting for machine to come up
	I0625 15:55:31.470570   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:31.470992   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find current IP address of domain ha-674765 in network mk-ha-674765
	I0625 15:55:31.471019   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:31.470961   36185 retry.go:31] will retry after 1.622865794s: waiting for machine to come up
	I0625 15:55:33.095647   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:33.095979   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find current IP address of domain ha-674765 in network mk-ha-674765
	I0625 15:55:33.096027   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:33.095945   36185 retry.go:31] will retry after 2.243945522s: waiting for machine to come up
	I0625 15:55:35.341661   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:35.342056   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find current IP address of domain ha-674765 in network mk-ha-674765
	I0625 15:55:35.342081   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:35.342028   36185 retry.go:31] will retry after 2.325430801s: waiting for machine to come up
	I0625 15:55:37.670562   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:37.670939   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find current IP address of domain ha-674765 in network mk-ha-674765
	I0625 15:55:37.670967   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:37.670902   36185 retry.go:31] will retry after 3.014906519s: waiting for machine to come up
	I0625 15:55:40.686901   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:40.687334   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find current IP address of domain ha-674765 in network mk-ha-674765
	I0625 15:55:40.687359   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:40.687272   36185 retry.go:31] will retry after 3.1399809s: waiting for machine to come up
	I0625 15:55:43.830396   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:43.830740   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find current IP address of domain ha-674765 in network mk-ha-674765
	I0625 15:55:43.830762   36162 main.go:141] libmachine: (ha-674765) DBG | I0625 15:55:43.830697   36185 retry.go:31] will retry after 4.710057228s: waiting for machine to come up
	I0625 15:55:48.545128   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:48.545528   36162 main.go:141] libmachine: (ha-674765) Found IP for machine: 192.168.39.128
	I0625 15:55:48.545552   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has current primary IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:48.545560   36162 main.go:141] libmachine: (ha-674765) Reserving static IP address...
	I0625 15:55:48.545852   36162 main.go:141] libmachine: (ha-674765) DBG | unable to find host DHCP lease matching {name: "ha-674765", mac: "52:54:00:6e:3a:48", ip: "192.168.39.128"} in network mk-ha-674765
	I0625 15:55:48.613796   36162 main.go:141] libmachine: (ha-674765) DBG | Getting to WaitForSSH function...
	I0625 15:55:48.613824   36162 main.go:141] libmachine: (ha-674765) Reserved static IP address: 192.168.39.128
	I0625 15:55:48.613838   36162 main.go:141] libmachine: (ha-674765) Waiting for SSH to be available...
	I0625 15:55:48.616086   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:48.616408   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:48.616433   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:48.616588   36162 main.go:141] libmachine: (ha-674765) DBG | Using SSH client type: external
	I0625 15:55:48.616613   36162 main.go:141] libmachine: (ha-674765) DBG | Using SSH private key: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa (-rw-------)
	I0625 15:55:48.616651   36162 main.go:141] libmachine: (ha-674765) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0625 15:55:48.616671   36162 main.go:141] libmachine: (ha-674765) DBG | About to run SSH command:
	I0625 15:55:48.616687   36162 main.go:141] libmachine: (ha-674765) DBG | exit 0
	I0625 15:55:48.741955   36162 main.go:141] libmachine: (ha-674765) DBG | SSH cmd err, output: <nil>: 
	I0625 15:55:48.742241   36162 main.go:141] libmachine: (ha-674765) KVM machine creation complete!
	I0625 15:55:48.742529   36162 main.go:141] libmachine: (ha-674765) Calling .GetConfigRaw
	I0625 15:55:48.743022   36162 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 15:55:48.743198   36162 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 15:55:48.743336   36162 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0625 15:55:48.743350   36162 main.go:141] libmachine: (ha-674765) Calling .GetState
	I0625 15:55:48.744495   36162 main.go:141] libmachine: Detecting operating system of created instance...
	I0625 15:55:48.744510   36162 main.go:141] libmachine: Waiting for SSH to be available...
	I0625 15:55:48.744525   36162 main.go:141] libmachine: Getting to WaitForSSH function...
	I0625 15:55:48.744535   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:55:48.746567   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:48.746928   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:48.746955   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:48.747081   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:55:48.747237   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:48.747396   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:48.747624   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:55:48.747780   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:55:48.747953   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0625 15:55:48.747963   36162 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0625 15:55:48.853464   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0625 15:55:48.853495   36162 main.go:141] libmachine: Detecting the provisioner...
	I0625 15:55:48.853502   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:55:48.856395   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:48.856736   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:48.856773   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:48.856914   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:55:48.857123   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:48.857372   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:48.857530   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:55:48.857693   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:55:48.857891   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0625 15:55:48.857903   36162 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0625 15:55:48.966886   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0625 15:55:48.967012   36162 main.go:141] libmachine: found compatible host: buildroot
	I0625 15:55:48.967031   36162 main.go:141] libmachine: Provisioning with buildroot...
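The provisioner is picked by reading /etc/os-release over SSH (the block captured just above) and looking at its ID field. A small sketch of that key=value parsing; the real detection in libmachine is more involved, so treat this as illustrative only.

package main

import "strings"

// osReleaseID returns the ID field from /etc/os-release style output,
// e.g. "buildroot" for the release block shown in the log.
func osReleaseID(osRelease string) string {
	for _, line := range strings.Split(osRelease, "\n") {
		k, v, ok := strings.Cut(strings.TrimSpace(line), "=")
		if ok && k == "ID" {
			return strings.Trim(v, `"`)
		}
	}
	return ""
}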
	I0625 15:55:48.967052   36162 main.go:141] libmachine: (ha-674765) Calling .GetMachineName
	I0625 15:55:48.967275   36162 buildroot.go:166] provisioning hostname "ha-674765"
	I0625 15:55:48.967301   36162 main.go:141] libmachine: (ha-674765) Calling .GetMachineName
	I0625 15:55:48.967499   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:55:48.969799   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:48.970086   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:48.970127   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:48.970284   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:55:48.970446   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:48.970616   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:48.970726   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:55:48.970871   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:55:48.971070   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0625 15:55:48.971084   36162 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-674765 && echo "ha-674765" | sudo tee /etc/hostname
	I0625 15:55:49.092063   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-674765
	
	I0625 15:55:49.092088   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:55:49.094515   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:49.095167   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:49.095194   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:49.095608   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:55:49.095807   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:49.095962   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:49.096058   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:55:49.096270   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:55:49.096433   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0625 15:55:49.096449   36162 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-674765' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-674765/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-674765' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0625 15:55:49.210753   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0625 15:55:49.210781   36162 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19128-13846/.minikube CaCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19128-13846/.minikube}
	I0625 15:55:49.210818   36162 buildroot.go:174] setting up certificates
	I0625 15:55:49.210834   36162 provision.go:84] configureAuth start
	I0625 15:55:49.210860   36162 main.go:141] libmachine: (ha-674765) Calling .GetMachineName
	I0625 15:55:49.211116   36162 main.go:141] libmachine: (ha-674765) Calling .GetIP
	I0625 15:55:49.213411   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:49.213698   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:49.213726   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:49.213825   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:55:49.215829   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:49.216199   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:49.216226   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:49.216377   36162 provision.go:143] copyHostCerts
	I0625 15:55:49.216405   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem
	I0625 15:55:49.216447   36162 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem, removing ...
	I0625 15:55:49.216456   36162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem
	I0625 15:55:49.216513   36162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem (1078 bytes)
	I0625 15:55:49.216590   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem
	I0625 15:55:49.216607   36162 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem, removing ...
	I0625 15:55:49.216613   36162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem
	I0625 15:55:49.216641   36162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem (1123 bytes)
	I0625 15:55:49.216693   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem
	I0625 15:55:49.216708   36162 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem, removing ...
	I0625 15:55:49.216714   36162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem
	I0625 15:55:49.216733   36162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem (1679 bytes)
	I0625 15:55:49.216789   36162 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem org=jenkins.ha-674765 san=[127.0.0.1 192.168.39.128 ha-674765 localhost minikube]
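provision.go then mints a server certificate signed by the CA under .minikube/certs, with exactly the SAN list printed above (IP addresses and DNS names mixed). A condensed crypto/x509 sketch of that step; the key size, validity window and subject fields are assumptions rather than minikube's actual parameters.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// newServerCert signs a server cert for the given SANs with an already-loaded CA cert and key.
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) (certPEM, keyPEM []byte, err error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{org}}, // e.g. "jenkins.ha-674765" from the log
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, san := range sans { // 127.0.0.1 192.168.39.128 ha-674765 localhost minikube
		if ip := net.ParseIP(san); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, san)
		}
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

The PEM pair corresponds to the server.pem and server-key.pem files that copyRemoteCerts ships to /etc/docker a few lines below.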
	I0625 15:55:49.483969   36162 provision.go:177] copyRemoteCerts
	I0625 15:55:49.484017   36162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0625 15:55:49.484037   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:55:49.486572   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:49.486879   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:49.486908   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:49.487050   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:55:49.487215   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:49.487366   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:55:49.487461   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 15:55:49.572233   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0625 15:55:49.572290   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0625 15:55:49.595865   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0625 15:55:49.595923   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0625 15:55:49.618380   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0625 15:55:49.618431   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0625 15:55:49.640840   36162 provision.go:87] duration metric: took 429.993244ms to configureAuth
	I0625 15:55:49.640859   36162 buildroot.go:189] setting minikube options for container-runtime
	I0625 15:55:49.641037   36162 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 15:55:49.641163   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:55:49.643407   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:49.643711   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:49.643740   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:49.643940   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:55:49.644183   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:49.644344   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:49.644447   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:55:49.644601   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:55:49.644751   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0625 15:55:49.644767   36162 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0625 15:55:49.901508   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0625 15:55:49.901537   36162 main.go:141] libmachine: Checking connection to Docker...
	I0625 15:55:49.901549   36162 main.go:141] libmachine: (ha-674765) Calling .GetURL
	I0625 15:55:49.902994   36162 main.go:141] libmachine: (ha-674765) DBG | Using libvirt version 6000000
	I0625 15:55:49.905144   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:49.905442   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:49.905463   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:49.905614   36162 main.go:141] libmachine: Docker is up and running!
	I0625 15:55:49.905635   36162 main.go:141] libmachine: Reticulating splines...
	I0625 15:55:49.905641   36162 client.go:171] duration metric: took 25.155932528s to LocalClient.Create
	I0625 15:55:49.905658   36162 start.go:167] duration metric: took 25.15598501s to libmachine.API.Create "ha-674765"
	I0625 15:55:49.905668   36162 start.go:293] postStartSetup for "ha-674765" (driver="kvm2")
	I0625 15:55:49.905676   36162 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0625 15:55:49.905691   36162 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 15:55:49.905900   36162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0625 15:55:49.905925   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:55:49.907752   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:49.908050   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:49.908082   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:49.908190   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:55:49.908355   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:49.908493   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:55:49.908623   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 15:55:49.992762   36162 ssh_runner.go:195] Run: cat /etc/os-release
	I0625 15:55:49.996757   36162 info.go:137] Remote host: Buildroot 2023.02.9
	I0625 15:55:49.996775   36162 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/addons for local assets ...
	I0625 15:55:49.996826   36162 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/files for local assets ...
	I0625 15:55:49.996903   36162 filesync.go:149] local asset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> 212392.pem in /etc/ssl/certs
	I0625 15:55:49.996913   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> /etc/ssl/certs/212392.pem
	I0625 15:55:49.996999   36162 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0625 15:55:50.006422   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem --> /etc/ssl/certs/212392.pem (1708 bytes)
	I0625 15:55:50.029248   36162 start.go:296] duration metric: took 123.570932ms for postStartSetup
	I0625 15:55:50.029287   36162 main.go:141] libmachine: (ha-674765) Calling .GetConfigRaw
	I0625 15:55:50.029897   36162 main.go:141] libmachine: (ha-674765) Calling .GetIP
	I0625 15:55:50.032220   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:50.032534   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:50.032570   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:50.032767   36162 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/config.json ...
	I0625 15:55:50.032948   36162 start.go:128] duration metric: took 25.300618567s to createHost
	I0625 15:55:50.032967   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:55:50.034984   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:50.035267   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:50.035305   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:50.035424   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:55:50.035597   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:50.035746   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:50.035866   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:55:50.036010   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:55:50.036155   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0625 15:55:50.036168   36162 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0625 15:55:50.142867   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719330950.113203034
	
	I0625 15:55:50.142890   36162 fix.go:216] guest clock: 1719330950.113203034
	I0625 15:55:50.142896   36162 fix.go:229] Guest: 2024-06-25 15:55:50.113203034 +0000 UTC Remote: 2024-06-25 15:55:50.032959072 +0000 UTC m=+25.400781994 (delta=80.243962ms)
	I0625 15:55:50.142916   36162 fix.go:200] guest clock delta is within tolerance: 80.243962ms
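The guest clock check reads "date +%s.%N" on the VM and compares it with the host clock; the 80ms delta above is judged within tolerance. A sketch of the parsing and delta arithmetic; the tolerance a caller would apply is not shown in this log and is left out.

package main

import (
	"strconv"
	"strings"
	"time"
)

// clockDelta turns "1719330950.113203034" style output into a time and returns
// its absolute offset from the given local reference time.
func clockDelta(guestOut string, local time.Time) (time.Duration, error) {
	secStr, nsecStr, _ := strings.Cut(strings.TrimSpace(guestOut), ".")
	sec, err := strconv.ParseInt(secStr, 10, 64)
	if err != nil {
		return 0, err
	}
	for len(nsecStr) < 9 { // pad the fractional part out to nanoseconds
		nsecStr += "0"
	}
	nsec, err := strconv.ParseInt(nsecStr[:9], 10, 64)
	if err != nil {
		return 0, err
	}
	d := time.Unix(sec, nsec).Sub(local)
	if d < 0 {
		d = -d
	}
	return d, nil
}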
	I0625 15:55:50.142922   36162 start.go:83] releasing machines lock for "ha-674765", held for 25.410670041s
	I0625 15:55:50.142946   36162 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 15:55:50.143188   36162 main.go:141] libmachine: (ha-674765) Calling .GetIP
	I0625 15:55:50.145581   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:50.145896   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:50.145924   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:50.146053   36162 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 15:55:50.146576   36162 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 15:55:50.146741   36162 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 15:55:50.146792   36162 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0625 15:55:50.146843   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:55:50.146956   36162 ssh_runner.go:195] Run: cat /version.json
	I0625 15:55:50.146973   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:55:50.149378   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:50.149515   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:50.149676   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:50.149694   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:50.149849   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:55:50.149921   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:50.149954   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:50.149994   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:50.150139   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:55:50.150173   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:55:50.150273   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 15:55:50.150326   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:55:50.150458   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:55:50.150592   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 15:55:50.227474   36162 ssh_runner.go:195] Run: systemctl --version
	I0625 15:55:50.250228   36162 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0625 15:55:50.409021   36162 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0625 15:55:50.415168   36162 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0625 15:55:50.415220   36162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0625 15:55:50.434896   36162 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0625 15:55:50.434910   36162 start.go:494] detecting cgroup driver to use...
	I0625 15:55:50.434948   36162 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0625 15:55:50.455185   36162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0625 15:55:50.471242   36162 docker.go:217] disabling cri-docker service (if available) ...
	I0625 15:55:50.471279   36162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0625 15:55:50.484823   36162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0625 15:55:50.499278   36162 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0625 15:55:50.617798   36162 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0625 15:55:50.762365   36162 docker.go:233] disabling docker service ...
	I0625 15:55:50.762423   36162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0625 15:55:50.777064   36162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0625 15:55:50.790038   36162 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0625 15:55:50.917709   36162 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0625 15:55:51.024372   36162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0625 15:55:51.038561   36162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0625 15:55:51.056392   36162 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0625 15:55:51.056450   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:55:51.066822   36162 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0625 15:55:51.066864   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:55:51.077158   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:55:51.087212   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:55:51.097401   36162 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0625 15:55:51.107728   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:55:51.117862   36162 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:55:51.134067   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
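Each of those sed calls rewrites one key in /etc/crio/crio.conf.d/02-crio.conf. The two main edits (pause image and cgroup manager) expressed as a local Go sketch; the regexes paraphrase the sed expressions above, and operating on the file directly instead of over SSH is the simplification.

package main

import (
	"os"
	"regexp"
)

// patchCrioConf rewrites pause_image and cgroup_manager in a cri-o drop-in,
// mirroring the first two sed invocations in the log.
func patchCrioConf(path, pauseImage, cgroupManager string) error {
	b, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	b = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(b, []byte(`pause_image = "`+pauseImage+`"`))
	b = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(b, []byte(`cgroup_manager = "`+cgroupManager+`"`))
	return os.WriteFile(path, b, 0o644)
}

patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.9", "cgroupfs") matches the values the log configures before crio is restarted.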
	I0625 15:55:51.144255   36162 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0625 15:55:51.153405   36162 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0625 15:55:51.153467   36162 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0625 15:55:51.165743   36162 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0625 15:55:51.174905   36162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 15:55:51.278267   36162 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0625 15:55:51.415511   36162 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0625 15:55:51.415587   36162 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0625 15:55:51.420302   36162 start.go:562] Will wait 60s for crictl version
	I0625 15:55:51.420365   36162 ssh_runner.go:195] Run: which crictl
	I0625 15:55:51.424005   36162 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0625 15:55:51.461545   36162 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0625 15:55:51.461604   36162 ssh_runner.go:195] Run: crio --version
	I0625 15:55:51.488841   36162 ssh_runner.go:195] Run: crio --version
	I0625 15:55:51.518881   36162 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0625 15:55:51.520141   36162 main.go:141] libmachine: (ha-674765) Calling .GetIP
	I0625 15:55:51.522528   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:51.522845   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:55:51.522865   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:55:51.523098   36162 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0625 15:55:51.527146   36162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
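That bash one-liner pins host.minikube.internal in the guest's /etc/hosts by filtering out any old entry and appending a fresh one. The same idea as a Go sketch operating on a local file; the sudo and temp-file dance from the log is left out.

package main

import (
	"os"
	"strings"
)

// pinHostsEntry rewrites an /etc/hosts style file so exactly one line maps name to ip.
func pinHostsEntry(path, ip, name string) error {
	b, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(b), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) { // drop a stale entry, like the grep -v above
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

pinHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal") reproduces the entry added above; the same pattern recurs later for control-plane.minikube.internal.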
	I0625 15:55:51.540086   36162 kubeadm.go:877] updating cluster {Name:ha-674765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-674765 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0625 15:55:51.540176   36162 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0625 15:55:51.540212   36162 ssh_runner.go:195] Run: sudo crictl images --output json
	I0625 15:55:51.572747   36162 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0625 15:55:51.572795   36162 ssh_runner.go:195] Run: which lz4
	I0625 15:55:51.576575   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0625 15:55:51.576668   36162 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0625 15:55:51.580841   36162 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0625 15:55:51.580862   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0625 15:55:52.957341   36162 crio.go:462] duration metric: took 1.380702907s to copy over tarball
	I0625 15:55:52.957422   36162 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0625 15:55:54.998908   36162 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.041453222s)
	I0625 15:55:54.998937   36162 crio.go:469] duration metric: took 2.041574258s to extract the tarball
	I0625 15:55:54.998944   36162 ssh_runner.go:146] rm: /preloaded.tar.lz4
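The roughly 395MB preload tarball is copied into the VM and unpacked with tar's lz4 filter, which is where the two-second extraction above comes from. A local sketch of that extraction with the same flags; running it via os/exec on the host rather than through ssh_runner is the simplification.

package main

import (
	"os"
	"os/exec"
	"time"
)

// extractPreload unpacks an lz4-compressed preload tarball into dir, using the
// same tar flags the log shows, and reports how long it took.
func extractPreload(tarball, dir string) (time.Duration, error) {
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dir, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	err := cmd.Run()
	return time.Since(start), err
}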
	I0625 15:55:55.036762   36162 ssh_runner.go:195] Run: sudo crictl images --output json
	I0625 15:55:55.081347   36162 crio.go:514] all images are preloaded for cri-o runtime.
	I0625 15:55:55.081367   36162 cache_images.go:84] Images are preloaded, skipping loading
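Whether the preload took effect is decided by listing images through crictl and checking for the expected tags (the earlier run at 15:55:51 found nothing, this one finds everything). A hedged sketch of that check; the JSON field names are assumed from crictl's "images --output json" format and are not taken from this log.

package main

import (
	"encoding/json"
	"os/exec"
)

// hasImage reports whether "crictl images --output json" lists the given tag.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var resp struct {
		Images []struct {
			RepoTags []string `json:"repoTags"` // assumed field name
		} `json:"images"` // assumed field name
	}
	if err := json.Unmarshal(out, &resp); err != nil {
		return false, err
	}
	for _, img := range resp.Images {
		for _, t := range img.RepoTags {
			if t == tag { // e.g. "registry.k8s.io/kube-apiserver:v1.30.2"
				return true, nil
			}
		}
	}
	return false, nil
}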
	I0625 15:55:55.081373   36162 kubeadm.go:928] updating node { 192.168.39.128 8443 v1.30.2 crio true true} ...
	I0625 15:55:55.081470   36162 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-674765 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-674765 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0625 15:55:55.081530   36162 ssh_runner.go:195] Run: crio config
	I0625 15:55:55.126079   36162 cni.go:84] Creating CNI manager for ""
	I0625 15:55:55.126096   36162 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0625 15:55:55.126104   36162 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0625 15:55:55.126123   36162 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.128 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-674765 NodeName:ha-674765 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0625 15:55:55.126238   36162 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.128
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-674765"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
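The kubeadm config dumped above is rendered from the option set logged at kubeadm.go:181. A heavily trimmed text/template sketch of that rendering, covering only the InitConfiguration head; the struct and field names are invented for the example and the real template carries many more fields.

package main

import (
	"bytes"
	"text/template"
)

// initTmpl is a cut-down stand-in for minikube's real kubeadm template.
const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

type initOpts struct {
	AdvertiseAddress, CRISocket, NodeName, NodeIP string
	BindPort                                      int
}

// renderInitConfig fills the template with the values seen in the log
// (192.168.39.128, unix:///var/run/crio/crio.sock, ha-674765, 8443).
func renderInitConfig(o initOpts) (string, error) {
	t, err := template.New("init").Parse(initTmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, o); err != nil {
		return "", err
	}
	return buf.String(), nil
}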
	I0625 15:55:55.126259   36162 kube-vip.go:115] generating kube-vip config ...
	I0625 15:55:55.126302   36162 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0625 15:55:55.143906   36162 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0625 15:55:55.143999   36162 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
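The generated kube-vip pod above is a static pod: a few lines further down the log copies it to /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes), the staticPodPath that kubelet watches. A tiny sketch of that final write; the 0644 file mode is an assumption.

package main

import (
	"os"
	"path/filepath"
)

// writeStaticPod drops a generated manifest where kubelet's staticPodPath picks it up.
func writeStaticPod(manifest []byte) error {
	const path = "/etc/kubernetes/manifests/kube-vip.yaml"
	if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
		return err
	}
	return os.WriteFile(path, manifest, 0o644)
}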
	I0625 15:55:55.144047   36162 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0625 15:55:55.153974   36162 binaries.go:44] Found k8s binaries, skipping transfer
	I0625 15:55:55.154040   36162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0625 15:55:55.163602   36162 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0625 15:55:55.179582   36162 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0625 15:55:55.195114   36162 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0625 15:55:55.210668   36162 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0625 15:55:55.226274   36162 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0625 15:55:55.229838   36162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0625 15:55:55.241546   36162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 15:55:55.345411   36162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0625 15:55:55.361122   36162 certs.go:68] Setting up /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765 for IP: 192.168.39.128
	I0625 15:55:55.361147   36162 certs.go:194] generating shared ca certs ...
	I0625 15:55:55.361166   36162 certs.go:226] acquiring lock for ca certs: {Name:mkac904b769881cd26c50f043dc80ff92937f71f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:55:55.361339   36162 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key
	I0625 15:55:55.361428   36162 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key
	I0625 15:55:55.361447   36162 certs.go:256] generating profile certs ...
	I0625 15:55:55.361516   36162 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.key
	I0625 15:55:55.361534   36162 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.crt with IP's: []
	I0625 15:55:55.481396   36162 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.crt ...
	I0625 15:55:55.481423   36162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.crt: {Name:mk634c6de4b44b2ccd54b0092cddfbae0f8e98b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:55:55.481599   36162 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.key ...
	I0625 15:55:55.481614   36162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.key: {Name:mk4d2d01e3f027181db556966898190cb645a4de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:55:55.481711   36162 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.222299a4
	I0625 15:55:55.481731   36162 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.222299a4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.128 192.168.39.254]
	I0625 15:55:55.692389   36162 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.222299a4 ...
	I0625 15:55:55.692417   36162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.222299a4: {Name:mkc1cda21cad476115bb27b306008e1b17c2836a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:55:55.692580   36162 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.222299a4 ...
	I0625 15:55:55.692596   36162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.222299a4: {Name:mk91e0f955e3f071068275bc216d2a474b5df152 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:55:55.692690   36162 certs.go:381] copying /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.222299a4 -> /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt
	I0625 15:55:55.692777   36162 certs.go:385] copying /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.222299a4 -> /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key
	I0625 15:55:55.692854   36162 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.key
	I0625 15:55:55.692874   36162 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.crt with IP's: []
	I0625 15:55:55.894014   36162 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.crt ...
	I0625 15:55:55.894043   36162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.crt: {Name:mk73ccd38d492e2b2476dc85013c84204bb41e27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:55:55.894211   36162 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.key ...
	I0625 15:55:55.894225   36162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.key: {Name:mkd5e59badd38772aa6667a35929b726353b412d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:55:55.894317   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0625 15:55:55.894338   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0625 15:55:55.894353   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0625 15:55:55.894369   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0625 15:55:55.894388   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0625 15:55:55.894404   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0625 15:55:55.894421   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0625 15:55:55.894441   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0625 15:55:55.894519   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem (1338 bytes)
	W0625 15:55:55.894573   36162 certs.go:480] ignoring /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239_empty.pem, impossibly tiny 0 bytes
	I0625 15:55:55.894595   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem (1679 bytes)
	I0625 15:55:55.894633   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem (1078 bytes)
	I0625 15:55:55.894665   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem (1123 bytes)
	I0625 15:55:55.894700   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem (1679 bytes)
	I0625 15:55:55.894753   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem (1708 bytes)
	I0625 15:55:55.894790   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem -> /usr/share/ca-certificates/21239.pem
	I0625 15:55:55.894810   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> /usr/share/ca-certificates/212392.pem
	I0625 15:55:55.894828   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0625 15:55:55.895449   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0625 15:55:55.920997   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0625 15:55:55.943502   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0625 15:55:55.966165   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0625 15:55:55.989049   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0625 15:55:56.011606   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0625 15:55:56.034379   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0625 15:55:56.056631   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0625 15:55:56.078948   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem --> /usr/share/ca-certificates/21239.pem (1338 bytes)
	I0625 15:55:56.101031   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem --> /usr/share/ca-certificates/212392.pem (1708 bytes)
	I0625 15:55:56.123517   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0625 15:55:56.154563   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0625 15:55:56.171469   36162 ssh_runner.go:195] Run: openssl version
	I0625 15:55:56.177275   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0625 15:55:56.187653   36162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0625 15:55:56.192219   36162 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 25 15:10 /usr/share/ca-certificates/minikubeCA.pem
	I0625 15:55:56.192267   36162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0625 15:55:56.201111   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0625 15:55:56.211415   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21239.pem && ln -fs /usr/share/ca-certificates/21239.pem /etc/ssl/certs/21239.pem"
	I0625 15:55:56.221456   36162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21239.pem
	I0625 15:55:56.225783   36162 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 25 15:51 /usr/share/ca-certificates/21239.pem
	I0625 15:55:56.225813   36162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21239.pem
	I0625 15:55:56.231245   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21239.pem /etc/ssl/certs/51391683.0"
	I0625 15:55:56.241405   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/212392.pem && ln -fs /usr/share/ca-certificates/212392.pem /etc/ssl/certs/212392.pem"
	I0625 15:55:56.251823   36162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/212392.pem
	I0625 15:55:56.256042   36162 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 25 15:51 /usr/share/ca-certificates/212392.pem
	I0625 15:55:56.256085   36162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/212392.pem
	I0625 15:55:56.261335   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/212392.pem /etc/ssl/certs/3ec20f2e.0"
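
The three blocks above repeat the same pattern for each trusted CA: copy it into /usr/share/ca-certificates, compute its OpenSSL subject hash, and link it into /etc/ssl/certs as <hash>.0 so OpenSSL-based clients can find it. A minimal Go sketch of that hash-and-symlink step (assuming openssl is on PATH and the program runs as root; this is not minikube's implementation):

// Sketch of the hash-and-symlink step seen above
// (openssl x509 -hash -noout, then ln -fs <cert> /etc/ssl/certs/<hash>.0).
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ignore error; mirrors ln -fs replacing an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked")
}
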
	I0625 15:55:56.271455   36162 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0625 15:55:56.275322   36162 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0625 15:55:56.275368   36162 kubeadm.go:391] StartCluster: {Name:ha-674765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-674765 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 15:55:56.275437   36162 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0625 15:55:56.275490   36162 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0625 15:55:56.312272   36162 cri.go:89] found id: ""
	I0625 15:55:56.312349   36162 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0625 15:55:56.321955   36162 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0625 15:55:56.331073   36162 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0625 15:55:56.340161   36162 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0625 15:55:56.340174   36162 kubeadm.go:156] found existing configuration files:
	
	I0625 15:55:56.340211   36162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0625 15:55:56.348919   36162 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0625 15:55:56.348954   36162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0625 15:55:56.357980   36162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0625 15:55:56.366690   36162 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0625 15:55:56.366722   36162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0625 15:55:56.375527   36162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0625 15:55:56.384050   36162 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0625 15:55:56.384092   36162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0625 15:55:56.392784   36162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0625 15:55:56.401224   36162 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0625 15:55:56.401253   36162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
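
The four grep/rm pairs above are the stale-config cleanup: each /etc/kubernetes/*.conf is kept only if it already points at https://control-plane.minikube.internal:8443 and is removed otherwise (here the files simply do not exist yet, so kubeadm will generate them). A rough sketch of that check, assuming the same file names and endpoint as in the log (not minikube's code):

// Remove a kubeconfig file if it is missing or does not reference the
// expected control-plane endpoint, mirroring the grep/rm pairs above.
package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(conf) // missing or stale: drop it so kubeadm regenerates it
			fmt.Println("removed (missing or stale):", conf)
			continue
		}
		fmt.Println("kept:", conf)
	}
}
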
	I0625 15:55:56.410192   36162 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0625 15:55:56.630938   36162 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0625 15:56:07.250800   36162 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0625 15:56:07.250886   36162 kubeadm.go:309] [preflight] Running pre-flight checks
	I0625 15:56:07.250948   36162 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0625 15:56:07.251032   36162 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0625 15:56:07.251166   36162 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0625 15:56:07.251289   36162 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0625 15:56:07.252642   36162 out.go:204]   - Generating certificates and keys ...
	I0625 15:56:07.252707   36162 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0625 15:56:07.252763   36162 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0625 15:56:07.252817   36162 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0625 15:56:07.252874   36162 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0625 15:56:07.252926   36162 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0625 15:56:07.252969   36162 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0625 15:56:07.253011   36162 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0625 15:56:07.253102   36162 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-674765 localhost] and IPs [192.168.39.128 127.0.0.1 ::1]
	I0625 15:56:07.253144   36162 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0625 15:56:07.253287   36162 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-674765 localhost] and IPs [192.168.39.128 127.0.0.1 ::1]
	I0625 15:56:07.253398   36162 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0625 15:56:07.253497   36162 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0625 15:56:07.253566   36162 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0625 15:56:07.253661   36162 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0625 15:56:07.253710   36162 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0625 15:56:07.253755   36162 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0625 15:56:07.253800   36162 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0625 15:56:07.253881   36162 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0625 15:56:07.253968   36162 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0625 15:56:07.254082   36162 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0625 15:56:07.254144   36162 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0625 15:56:07.255497   36162 out.go:204]   - Booting up control plane ...
	I0625 15:56:07.255581   36162 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0625 15:56:07.255668   36162 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0625 15:56:07.255754   36162 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0625 15:56:07.255866   36162 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0625 15:56:07.255984   36162 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0625 15:56:07.256035   36162 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0625 15:56:07.256187   36162 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0625 15:56:07.256253   36162 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0625 15:56:07.256342   36162 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.414724ms
	I0625 15:56:07.256437   36162 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0625 15:56:07.256518   36162 kubeadm.go:309] [api-check] The API server is healthy after 6.136500068s
	I0625 15:56:07.256635   36162 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0625 15:56:07.256775   36162 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0625 15:56:07.256860   36162 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0625 15:56:07.257066   36162 kubeadm.go:309] [mark-control-plane] Marking the node ha-674765 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0625 15:56:07.257148   36162 kubeadm.go:309] [bootstrap-token] Using token: fawvb8.q5jg5dbcsoua7fro
	I0625 15:56:07.258304   36162 out.go:204]   - Configuring RBAC rules ...
	I0625 15:56:07.258405   36162 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0625 15:56:07.258498   36162 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0625 15:56:07.258620   36162 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0625 15:56:07.258764   36162 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0625 15:56:07.258902   36162 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0625 15:56:07.258983   36162 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0625 15:56:07.259084   36162 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0625 15:56:07.259127   36162 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0625 15:56:07.259175   36162 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0625 15:56:07.259182   36162 kubeadm.go:309] 
	I0625 15:56:07.259247   36162 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0625 15:56:07.259265   36162 kubeadm.go:309] 
	I0625 15:56:07.259322   36162 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0625 15:56:07.259328   36162 kubeadm.go:309] 
	I0625 15:56:07.259355   36162 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0625 15:56:07.259404   36162 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0625 15:56:07.259444   36162 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0625 15:56:07.259453   36162 kubeadm.go:309] 
	I0625 15:56:07.259509   36162 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0625 15:56:07.259517   36162 kubeadm.go:309] 
	I0625 15:56:07.259556   36162 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0625 15:56:07.259562   36162 kubeadm.go:309] 
	I0625 15:56:07.259628   36162 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0625 15:56:07.259724   36162 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0625 15:56:07.259818   36162 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0625 15:56:07.259827   36162 kubeadm.go:309] 
	I0625 15:56:07.259924   36162 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0625 15:56:07.260029   36162 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0625 15:56:07.260038   36162 kubeadm.go:309] 
	I0625 15:56:07.260129   36162 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token fawvb8.q5jg5dbcsoua7fro \
	I0625 15:56:07.260247   36162 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:df4523a4334c80aff4a7c2fc7b4a73691744a675a28cdb3d4468287f693ab03d \
	I0625 15:56:07.260276   36162 kubeadm.go:309] 	--control-plane 
	I0625 15:56:07.260285   36162 kubeadm.go:309] 
	I0625 15:56:07.260383   36162 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0625 15:56:07.260393   36162 kubeadm.go:309] 
	I0625 15:56:07.260490   36162 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token fawvb8.q5jg5dbcsoua7fro \
	I0625 15:56:07.260653   36162 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:df4523a4334c80aff4a7c2fc7b4a73691744a675a28cdb3d4468287f693ab03d 
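
The join commands printed by kubeadm above pin the cluster CA with --discovery-token-ca-cert-hash sha256:…; kubeadm is understood to derive that value from the SHA-256 of the CA certificate's Subject Public Key Info. The sketch below reproduces that computation for the ca.crt referenced earlier in this log (path taken from the scp lines above; treat the exact derivation as an assumption, not something this report verifies):

// Compute a kubeadm-style CA public key pin:
// sha256 over the CA certificate's SubjectPublicKeyInfo.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("/home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
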
	I0625 15:56:07.260669   36162 cni.go:84] Creating CNI manager for ""
	I0625 15:56:07.260676   36162 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0625 15:56:07.261939   36162 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0625 15:56:07.262963   36162 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0625 15:56:07.268838   36162 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0625 15:56:07.268854   36162 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0625 15:56:07.288846   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0625 15:56:07.635446   36162 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0625 15:56:07.635529   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:07.635533   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-674765 minikube.k8s.io/updated_at=2024_06_25T15_56_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b minikube.k8s.io/name=ha-674765 minikube.k8s.io/primary=true
	I0625 15:56:07.838524   36162 ops.go:34] apiserver oom_adj: -16
	I0625 15:56:07.838594   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:08.339101   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:08.838997   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:09.338626   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:09.839575   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:10.339604   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:10.839529   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:11.339182   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:11.839203   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:12.338597   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:12.839579   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:13.339019   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:13.839408   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:14.339441   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:14.839398   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:15.338795   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:15.839652   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:16.339589   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:16.839361   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:17.338701   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:17.839196   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:18.339345   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0625 15:56:18.455382   36162 kubeadm.go:1107] duration metric: took 10.819926294s to wait for elevateKubeSystemPrivileges
	W0625 15:56:18.455428   36162 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0625 15:56:18.455438   36162 kubeadm.go:393] duration metric: took 22.180073428s to StartCluster
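
The burst of identical `kubectl get sa default` calls between 15:56:07 and 15:56:18 is a poll loop: minikube retries roughly every 500ms until the default service account exists, which is what the 10.8s elevateKubeSystemPrivileges metric above measures. A generic sketch of that poll-until-ready pattern (a plain kubectl on PATH is assumed here; the log shows the real calls going over SSH to the node's bundled kubectl):

// Poll a command every 500ms until it succeeds or a timeout expires,
// echoing the repeated "kubectl get sa default" calls above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func pollUntil(timeout, interval time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out, last error: %w", err)
		}
		time.Sleep(interval)
	}
}

func main() {
	err := pollUntil(2*time.Minute, 500*time.Millisecond, func() error {
		return exec.Command("kubectl",
			"--kubeconfig", "/var/lib/minikube/kubeconfig", // kubeconfig path from the log
			"get", "sa", "default").Run()
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("default service account is ready")
}
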
	I0625 15:56:18.455457   36162 settings.go:142] acquiring lock: {Name:mk38d7db80b40da56857d65b8e7da05700cdb9d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:56:18.455531   36162 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19128-13846/kubeconfig
	I0625 15:56:18.456169   36162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/kubeconfig: {Name:mk71a37176bd7deadd1f1cd3c756fe56f3b0810d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:56:18.456356   36162 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0625 15:56:18.456387   36162 start.go:240] waiting for startup goroutines ...
	I0625 15:56:18.456363   36162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0625 15:56:18.456394   36162 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0625 15:56:18.456490   36162 addons.go:69] Setting storage-provisioner=true in profile "ha-674765"
	I0625 15:56:18.456515   36162 addons.go:69] Setting default-storageclass=true in profile "ha-674765"
	I0625 15:56:18.456549   36162 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 15:56:18.456568   36162 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-674765"
	I0625 15:56:18.456519   36162 addons.go:234] Setting addon storage-provisioner=true in "ha-674765"
	I0625 15:56:18.456625   36162 host.go:66] Checking if "ha-674765" exists ...
	I0625 15:56:18.456973   36162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:56:18.456985   36162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:56:18.456999   36162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:56:18.457006   36162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:56:18.471583   36162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40779
	I0625 15:56:18.471871   36162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37905
	I0625 15:56:18.472124   36162 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:56:18.472338   36162 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:56:18.472619   36162 main.go:141] libmachine: Using API Version  1
	I0625 15:56:18.472642   36162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:56:18.472791   36162 main.go:141] libmachine: Using API Version  1
	I0625 15:56:18.472810   36162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:56:18.472957   36162 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:56:18.473060   36162 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:56:18.473190   36162 main.go:141] libmachine: (ha-674765) Calling .GetState
	I0625 15:56:18.473505   36162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:56:18.473537   36162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:56:18.475310   36162 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19128-13846/kubeconfig
	I0625 15:56:18.475528   36162 kapi.go:59] client config for ha-674765: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.crt", KeyFile:"/home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.key", CAFile:"/home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfa3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0625 15:56:18.475961   36162 cert_rotation.go:137] Starting client certificate rotation controller
	I0625 15:56:18.476120   36162 addons.go:234] Setting addon default-storageclass=true in "ha-674765"
	I0625 15:56:18.476149   36162 host.go:66] Checking if "ha-674765" exists ...
	I0625 15:56:18.476379   36162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:56:18.476415   36162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:56:18.488078   36162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39921
	I0625 15:56:18.488555   36162 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:56:18.489023   36162 main.go:141] libmachine: Using API Version  1
	I0625 15:56:18.489048   36162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:56:18.489415   36162 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:56:18.489610   36162 main.go:141] libmachine: (ha-674765) Calling .GetState
	I0625 15:56:18.489779   36162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42229
	I0625 15:56:18.490212   36162 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:56:18.490711   36162 main.go:141] libmachine: Using API Version  1
	I0625 15:56:18.490733   36162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:56:18.491094   36162 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:56:18.491359   36162 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 15:56:18.491665   36162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:56:18.491725   36162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:56:18.493423   36162 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0625 15:56:18.494662   36162 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0625 15:56:18.494680   36162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0625 15:56:18.494696   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:56:18.497391   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:56:18.497771   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:56:18.497791   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:56:18.497945   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:56:18.498107   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:56:18.498223   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:56:18.498345   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
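
sshutil above opens an SSH connection to the node at 192.168.39.128:22 as user docker with the machine's id_rsa; that connection is what carries the storage-provisioner manifest in the following lines. A rough golang.org/x/crypto/ssh sketch of the same connection (host-key checking is skipped here purely for illustration; this is not minikube's sshutil):

// Connect to the node over SSH with the machine key from the log and run a command.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, not for production
	}
	client, err := ssh.Dial("tcp", "192.168.39.128:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("uname -a")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
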
	I0625 15:56:18.505779   36162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43979
	I0625 15:56:18.506135   36162 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:56:18.508727   36162 main.go:141] libmachine: Using API Version  1
	I0625 15:56:18.508755   36162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:56:18.509106   36162 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:56:18.509286   36162 main.go:141] libmachine: (ha-674765) Calling .GetState
	I0625 15:56:18.510835   36162 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 15:56:18.511037   36162 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0625 15:56:18.511051   36162 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0625 15:56:18.511063   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:56:18.513438   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:56:18.513770   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:56:18.513793   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:56:18.514023   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:56:18.514201   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:56:18.514350   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:56:18.514513   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 15:56:18.576559   36162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0625 15:56:18.647420   36162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0625 15:56:18.677361   36162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0625 15:56:18.958492   36162 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0625 15:56:19.177252   36162 main.go:141] libmachine: Making call to close driver server
	I0625 15:56:19.177278   36162 main.go:141] libmachine: (ha-674765) Calling .Close
	I0625 15:56:19.177253   36162 main.go:141] libmachine: Making call to close driver server
	I0625 15:56:19.177344   36162 main.go:141] libmachine: (ha-674765) Calling .Close
	I0625 15:56:19.177546   36162 main.go:141] libmachine: (ha-674765) DBG | Closing plugin on server side
	I0625 15:56:19.177583   36162 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:56:19.177588   36162 main.go:141] libmachine: (ha-674765) DBG | Closing plugin on server side
	I0625 15:56:19.177596   36162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:56:19.177607   36162 main.go:141] libmachine: Making call to close driver server
	I0625 15:56:19.177616   36162 main.go:141] libmachine: (ha-674765) Calling .Close
	I0625 15:56:19.177687   36162 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:56:19.177701   36162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:56:19.177714   36162 main.go:141] libmachine: Making call to close driver server
	I0625 15:56:19.177724   36162 main.go:141] libmachine: (ha-674765) Calling .Close
	I0625 15:56:19.177923   36162 main.go:141] libmachine: (ha-674765) DBG | Closing plugin on server side
	I0625 15:56:19.177933   36162 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:56:19.177945   36162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:56:19.177950   36162 main.go:141] libmachine: (ha-674765) DBG | Closing plugin on server side
	I0625 15:56:19.177981   36162 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:56:19.178003   36162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:56:19.178106   36162 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0625 15:56:19.178119   36162 round_trippers.go:469] Request Headers:
	I0625 15:56:19.178130   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:56:19.178135   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:56:19.188599   36162 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0625 15:56:19.189264   36162 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0625 15:56:19.189282   36162 round_trippers.go:469] Request Headers:
	I0625 15:56:19.189294   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:56:19.189303   36162 round_trippers.go:473]     Content-Type: application/json
	I0625 15:56:19.189307   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:56:19.198099   36162 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
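
The GET and PUT round trips above read and then update the "standard" StorageClass right after the storageclass addon is applied, which is typically where it is marked as the cluster default. A hedged client-go sketch of such an update, assuming the conventional storageclass.kubernetes.io/is-default-class annotation (the log does not show the request body, so the exact change being made is an assumption):

// Mark the "standard" StorageClass as default via client-go, roughly what the
// GET/PUT against /apis/storage.k8s.io/v1/storageclasses above appears to do.
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19128-13846/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("standard StorageClass marked default")
}
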
	I0625 15:56:19.198461   36162 main.go:141] libmachine: Making call to close driver server
	I0625 15:56:19.198493   36162 main.go:141] libmachine: (ha-674765) Calling .Close
	I0625 15:56:19.198709   36162 main.go:141] libmachine: Successfully made call to close driver server
	I0625 15:56:19.198725   36162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 15:56:19.200299   36162 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0625 15:56:19.201450   36162 addons.go:510] duration metric: took 745.059817ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0625 15:56:19.201475   36162 start.go:245] waiting for cluster config update ...
	I0625 15:56:19.201485   36162 start.go:254] writing updated cluster config ...
	I0625 15:56:19.202840   36162 out.go:177] 
	I0625 15:56:19.204101   36162 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 15:56:19.204186   36162 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/config.json ...
	I0625 15:56:19.205733   36162 out.go:177] * Starting "ha-674765-m02" control-plane node in "ha-674765" cluster
	I0625 15:56:19.206970   36162 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0625 15:56:19.206988   36162 cache.go:56] Caching tarball of preloaded images
	I0625 15:56:19.207057   36162 preload.go:173] Found /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0625 15:56:19.207068   36162 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0625 15:56:19.207125   36162 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/config.json ...
	I0625 15:56:19.207261   36162 start.go:360] acquireMachinesLock for ha-674765-m02: {Name:mk2a1ebee912b37a2b68bf2f76641f82f8fc2fcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0625 15:56:19.207296   36162 start.go:364] duration metric: took 19.689µs to acquireMachinesLock for "ha-674765-m02"
	I0625 15:56:19.207312   36162 start.go:93] Provisioning new machine with config: &{Name:ha-674765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-674765 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0625 15:56:19.207375   36162 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0625 15:56:19.208756   36162 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0625 15:56:19.208812   36162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:56:19.208833   36162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:56:19.222743   36162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34923
	I0625 15:56:19.223095   36162 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:56:19.223522   36162 main.go:141] libmachine: Using API Version  1
	I0625 15:56:19.223544   36162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:56:19.223907   36162 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:56:19.224089   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetMachineName
	I0625 15:56:19.224247   36162 main.go:141] libmachine: (ha-674765-m02) Calling .DriverName
	I0625 15:56:19.224390   36162 start.go:159] libmachine.API.Create for "ha-674765" (driver="kvm2")
	I0625 15:56:19.224411   36162 client.go:168] LocalClient.Create starting
	I0625 15:56:19.224444   36162 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem
	I0625 15:56:19.224483   36162 main.go:141] libmachine: Decoding PEM data...
	I0625 15:56:19.224510   36162 main.go:141] libmachine: Parsing certificate...
	I0625 15:56:19.224575   36162 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem
	I0625 15:56:19.224602   36162 main.go:141] libmachine: Decoding PEM data...
	I0625 15:56:19.224618   36162 main.go:141] libmachine: Parsing certificate...
	I0625 15:56:19.224643   36162 main.go:141] libmachine: Running pre-create checks...
	I0625 15:56:19.224655   36162 main.go:141] libmachine: (ha-674765-m02) Calling .PreCreateCheck
	I0625 15:56:19.224859   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetConfigRaw
	I0625 15:56:19.225299   36162 main.go:141] libmachine: Creating machine...
	I0625 15:56:19.225327   36162 main.go:141] libmachine: (ha-674765-m02) Calling .Create
	I0625 15:56:19.225446   36162 main.go:141] libmachine: (ha-674765-m02) Creating KVM machine...
	I0625 15:56:19.226578   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found existing default KVM network
	I0625 15:56:19.226766   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found existing private KVM network mk-ha-674765
	I0625 15:56:19.226902   36162 main.go:141] libmachine: (ha-674765-m02) Setting up store path in /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02 ...
	I0625 15:56:19.226926   36162 main.go:141] libmachine: (ha-674765-m02) Building disk image from file:///home/jenkins/minikube-integration/19128-13846/.minikube/cache/iso/amd64/minikube-v1.33.1-1719245461-19128-amd64.iso
	I0625 15:56:19.226976   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:19.226876   36561 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 15:56:19.227083   36162 main.go:141] libmachine: (ha-674765-m02) Downloading /home/jenkins/minikube-integration/19128-13846/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19128-13846/.minikube/cache/iso/amd64/minikube-v1.33.1-1719245461-19128-amd64.iso...
	I0625 15:56:19.447297   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:19.447171   36561 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/id_rsa...
	I0625 15:56:19.975551   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:19.975447   36561 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/ha-674765-m02.rawdisk...
	I0625 15:56:19.975577   36162 main.go:141] libmachine: (ha-674765-m02) DBG | Writing magic tar header
	I0625 15:56:19.975587   36162 main.go:141] libmachine: (ha-674765-m02) DBG | Writing SSH key tar header
	I0625 15:56:19.975594   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:19.975564   36561 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02 ...
	I0625 15:56:19.975697   36162 main.go:141] libmachine: (ha-674765-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02
	I0625 15:56:19.975737   36162 main.go:141] libmachine: (ha-674765-m02) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02 (perms=drwx------)
	I0625 15:56:19.975753   36162 main.go:141] libmachine: (ha-674765-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube/machines
	I0625 15:56:19.975782   36162 main.go:141] libmachine: (ha-674765-m02) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube/machines (perms=drwxr-xr-x)
	I0625 15:56:19.975805   36162 main.go:141] libmachine: (ha-674765-m02) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube (perms=drwxr-xr-x)
	I0625 15:56:19.975817   36162 main.go:141] libmachine: (ha-674765-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 15:56:19.975831   36162 main.go:141] libmachine: (ha-674765-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846
	I0625 15:56:19.975845   36162 main.go:141] libmachine: (ha-674765-m02) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846 (perms=drwxrwxr-x)
	I0625 15:56:19.975857   36162 main.go:141] libmachine: (ha-674765-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0625 15:56:19.975869   36162 main.go:141] libmachine: (ha-674765-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0625 15:56:19.975881   36162 main.go:141] libmachine: (ha-674765-m02) Creating domain...
	I0625 15:56:19.975896   36162 main.go:141] libmachine: (ha-674765-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0625 15:56:19.975908   36162 main.go:141] libmachine: (ha-674765-m02) DBG | Checking permissions on dir: /home/jenkins
	I0625 15:56:19.975921   36162 main.go:141] libmachine: (ha-674765-m02) DBG | Checking permissions on dir: /home
	I0625 15:56:19.975932   36162 main.go:141] libmachine: (ha-674765-m02) DBG | Skipping /home - not owner
	I0625 15:56:19.976781   36162 main.go:141] libmachine: (ha-674765-m02) define libvirt domain using xml: 
	I0625 15:56:19.976802   36162 main.go:141] libmachine: (ha-674765-m02) <domain type='kvm'>
	I0625 15:56:19.976811   36162 main.go:141] libmachine: (ha-674765-m02)   <name>ha-674765-m02</name>
	I0625 15:56:19.976821   36162 main.go:141] libmachine: (ha-674765-m02)   <memory unit='MiB'>2200</memory>
	I0625 15:56:19.976830   36162 main.go:141] libmachine: (ha-674765-m02)   <vcpu>2</vcpu>
	I0625 15:56:19.976841   36162 main.go:141] libmachine: (ha-674765-m02)   <features>
	I0625 15:56:19.976850   36162 main.go:141] libmachine: (ha-674765-m02)     <acpi/>
	I0625 15:56:19.976858   36162 main.go:141] libmachine: (ha-674765-m02)     <apic/>
	I0625 15:56:19.976870   36162 main.go:141] libmachine: (ha-674765-m02)     <pae/>
	I0625 15:56:19.976877   36162 main.go:141] libmachine: (ha-674765-m02)     
	I0625 15:56:19.976888   36162 main.go:141] libmachine: (ha-674765-m02)   </features>
	I0625 15:56:19.976904   36162 main.go:141] libmachine: (ha-674765-m02)   <cpu mode='host-passthrough'>
	I0625 15:56:19.976915   36162 main.go:141] libmachine: (ha-674765-m02)   
	I0625 15:56:19.976926   36162 main.go:141] libmachine: (ha-674765-m02)   </cpu>
	I0625 15:56:19.976938   36162 main.go:141] libmachine: (ha-674765-m02)   <os>
	I0625 15:56:19.976948   36162 main.go:141] libmachine: (ha-674765-m02)     <type>hvm</type>
	I0625 15:56:19.976960   36162 main.go:141] libmachine: (ha-674765-m02)     <boot dev='cdrom'/>
	I0625 15:56:19.976976   36162 main.go:141] libmachine: (ha-674765-m02)     <boot dev='hd'/>
	I0625 15:56:19.976989   36162 main.go:141] libmachine: (ha-674765-m02)     <bootmenu enable='no'/>
	I0625 15:56:19.976999   36162 main.go:141] libmachine: (ha-674765-m02)   </os>
	I0625 15:56:19.977010   36162 main.go:141] libmachine: (ha-674765-m02)   <devices>
	I0625 15:56:19.977022   36162 main.go:141] libmachine: (ha-674765-m02)     <disk type='file' device='cdrom'>
	I0625 15:56:19.977039   36162 main.go:141] libmachine: (ha-674765-m02)       <source file='/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/boot2docker.iso'/>
	I0625 15:56:19.977059   36162 main.go:141] libmachine: (ha-674765-m02)       <target dev='hdc' bus='scsi'/>
	I0625 15:56:19.977071   36162 main.go:141] libmachine: (ha-674765-m02)       <readonly/>
	I0625 15:56:19.977081   36162 main.go:141] libmachine: (ha-674765-m02)     </disk>
	I0625 15:56:19.977095   36162 main.go:141] libmachine: (ha-674765-m02)     <disk type='file' device='disk'>
	I0625 15:56:19.977112   36162 main.go:141] libmachine: (ha-674765-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0625 15:56:19.977138   36162 main.go:141] libmachine: (ha-674765-m02)       <source file='/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/ha-674765-m02.rawdisk'/>
	I0625 15:56:19.977156   36162 main.go:141] libmachine: (ha-674765-m02)       <target dev='hda' bus='virtio'/>
	I0625 15:56:19.977166   36162 main.go:141] libmachine: (ha-674765-m02)     </disk>
	I0625 15:56:19.977191   36162 main.go:141] libmachine: (ha-674765-m02)     <interface type='network'>
	I0625 15:56:19.977204   36162 main.go:141] libmachine: (ha-674765-m02)       <source network='mk-ha-674765'/>
	I0625 15:56:19.977214   36162 main.go:141] libmachine: (ha-674765-m02)       <model type='virtio'/>
	I0625 15:56:19.977225   36162 main.go:141] libmachine: (ha-674765-m02)     </interface>
	I0625 15:56:19.977236   36162 main.go:141] libmachine: (ha-674765-m02)     <interface type='network'>
	I0625 15:56:19.977247   36162 main.go:141] libmachine: (ha-674765-m02)       <source network='default'/>
	I0625 15:56:19.977261   36162 main.go:141] libmachine: (ha-674765-m02)       <model type='virtio'/>
	I0625 15:56:19.977273   36162 main.go:141] libmachine: (ha-674765-m02)     </interface>
	I0625 15:56:19.977284   36162 main.go:141] libmachine: (ha-674765-m02)     <serial type='pty'>
	I0625 15:56:19.977296   36162 main.go:141] libmachine: (ha-674765-m02)       <target port='0'/>
	I0625 15:56:19.977305   36162 main.go:141] libmachine: (ha-674765-m02)     </serial>
	I0625 15:56:19.977321   36162 main.go:141] libmachine: (ha-674765-m02)     <console type='pty'>
	I0625 15:56:19.977343   36162 main.go:141] libmachine: (ha-674765-m02)       <target type='serial' port='0'/>
	I0625 15:56:19.977357   36162 main.go:141] libmachine: (ha-674765-m02)     </console>
	I0625 15:56:19.977371   36162 main.go:141] libmachine: (ha-674765-m02)     <rng model='virtio'>
	I0625 15:56:19.977383   36162 main.go:141] libmachine: (ha-674765-m02)       <backend model='random'>/dev/random</backend>
	I0625 15:56:19.977394   36162 main.go:141] libmachine: (ha-674765-m02)     </rng>
	I0625 15:56:19.977403   36162 main.go:141] libmachine: (ha-674765-m02)     
	I0625 15:56:19.977414   36162 main.go:141] libmachine: (ha-674765-m02)     
	I0625 15:56:19.977423   36162 main.go:141] libmachine: (ha-674765-m02)   </devices>
	I0625 15:56:19.977432   36162 main.go:141] libmachine: (ha-674765-m02) </domain>
	I0625 15:56:19.977447   36162 main.go:141] libmachine: (ha-674765-m02) 
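
The block above is the libvirt domain XML minikube defines for the ha-674765-m02 VM: kvm type, 2200 MiB of memory, 2 vCPUs, the boot2docker ISO attached as a cdrom, the raw disk, and two virtio NICs on the mk-ha-674765 and default networks. Purely for illustration, a much-simplified definition of the same shape can be produced with encoding/xml (this is not minikube's template; the disk, serial, and rng devices are omitted):

// Generate a simplified libvirt <domain> document mirroring the one above.
package main

import (
	"encoding/xml"
	"fmt"
	"log"
)

type Memory struct {
	Unit  string `xml:"unit,attr"`
	Value int    `xml:",chardata"`
}

type Interface struct {
	Type   string `xml:"type,attr"`
	Source struct {
		Network string `xml:"network,attr"`
	} `xml:"source"`
	Model struct {
		Type string `xml:"type,attr"`
	} `xml:"model"`
}

type Domain struct {
	XMLName    xml.Name    `xml:"domain"`
	Type       string      `xml:"type,attr"`
	Name       string      `xml:"name"`
	Memory     Memory      `xml:"memory"`
	VCPU       int         `xml:"vcpu"`
	Interfaces []Interface `xml:"devices>interface"`
}

func main() {
	nic := func(network string) Interface {
		var i Interface
		i.Type = "network"
		i.Source.Network = network
		i.Model.Type = "virtio"
		return i
	}
	d := Domain{
		Type:       "kvm",
		Name:       "ha-674765-m02",
		Memory:     Memory{Unit: "MiB", Value: 2200},
		VCPU:       2,
		Interfaces: []Interface{nic("mk-ha-674765"), nic("default")},
	}
	out, err := xml.MarshalIndent(d, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}
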
	I0625 15:56:19.984916   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:d6:eb:ee in network default
	I0625 15:56:19.985488   36162 main.go:141] libmachine: (ha-674765-m02) Ensuring networks are active...
	I0625 15:56:19.985506   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:19.986270   36162 main.go:141] libmachine: (ha-674765-m02) Ensuring network default is active
	I0625 15:56:19.986621   36162 main.go:141] libmachine: (ha-674765-m02) Ensuring network mk-ha-674765 is active
	I0625 15:56:19.987198   36162 main.go:141] libmachine: (ha-674765-m02) Getting domain xml...
	I0625 15:56:19.987966   36162 main.go:141] libmachine: (ha-674765-m02) Creating domain...
	I0625 15:56:21.179303   36162 main.go:141] libmachine: (ha-674765-m02) Waiting to get IP...
	I0625 15:56:21.180185   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:21.180587   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find current IP address of domain ha-674765-m02 in network mk-ha-674765
	I0625 15:56:21.180639   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:21.180580   36561 retry.go:31] will retry after 282.650658ms: waiting for machine to come up
	I0625 15:56:21.465057   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:21.465535   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find current IP address of domain ha-674765-m02 in network mk-ha-674765
	I0625 15:56:21.465566   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:21.465511   36561 retry.go:31] will retry after 336.945771ms: waiting for machine to come up
	I0625 15:56:21.803843   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:21.804361   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find current IP address of domain ha-674765-m02 in network mk-ha-674765
	I0625 15:56:21.804394   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:21.804310   36561 retry.go:31] will retry after 387.860578ms: waiting for machine to come up
	I0625 15:56:22.193809   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:22.194306   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find current IP address of domain ha-674765-m02 in network mk-ha-674765
	I0625 15:56:22.194337   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:22.194269   36561 retry.go:31] will retry after 505.4586ms: waiting for machine to come up
	I0625 15:56:22.701076   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:22.701551   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find current IP address of domain ha-674765-m02 in network mk-ha-674765
	I0625 15:56:22.701579   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:22.701503   36561 retry.go:31] will retry after 747.446006ms: waiting for machine to come up
	I0625 15:56:23.449951   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:23.450415   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find current IP address of domain ha-674765-m02 in network mk-ha-674765
	I0625 15:56:23.450441   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:23.450342   36561 retry.go:31] will retry after 613.447951ms: waiting for machine to come up
	I0625 15:56:24.064836   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:24.065296   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find current IP address of domain ha-674765-m02 in network mk-ha-674765
	I0625 15:56:24.065313   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:24.065262   36561 retry.go:31] will retry after 903.605792ms: waiting for machine to come up
	I0625 15:56:24.971237   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:24.971676   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find current IP address of domain ha-674765-m02 in network mk-ha-674765
	I0625 15:56:24.971701   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:24.971635   36561 retry.go:31] will retry after 1.047838265s: waiting for machine to come up
	I0625 15:56:26.020788   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:26.021179   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find current IP address of domain ha-674765-m02 in network mk-ha-674765
	I0625 15:56:26.021206   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:26.021135   36561 retry.go:31] will retry after 1.430529445s: waiting for machine to come up
	I0625 15:56:27.453560   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:27.453922   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find current IP address of domain ha-674765-m02 in network mk-ha-674765
	I0625 15:56:27.453946   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:27.453874   36561 retry.go:31] will retry after 2.175772528s: waiting for machine to come up
	I0625 15:56:29.631331   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:29.631893   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find current IP address of domain ha-674765-m02 in network mk-ha-674765
	I0625 15:56:29.631918   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:29.631847   36561 retry.go:31] will retry after 1.836171852s: waiting for machine to come up
	I0625 15:56:31.469626   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:31.470037   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find current IP address of domain ha-674765-m02 in network mk-ha-674765
	I0625 15:56:31.470086   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:31.470020   36561 retry.go:31] will retry after 2.361454491s: waiting for machine to come up
	I0625 15:56:33.834350   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:33.834856   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find current IP address of domain ha-674765-m02 in network mk-ha-674765
	I0625 15:56:33.834879   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:33.834813   36561 retry.go:31] will retry after 4.478470724s: waiting for machine to come up
	I0625 15:56:38.316527   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:38.316937   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find current IP address of domain ha-674765-m02 in network mk-ha-674765
	I0625 15:56:38.316963   36162 main.go:141] libmachine: (ha-674765-m02) DBG | I0625 15:56:38.316900   36561 retry.go:31] will retry after 5.11600979s: waiting for machine to come up
	I0625 15:56:43.435616   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:43.436057   36162 main.go:141] libmachine: (ha-674765-m02) Found IP for machine: 192.168.39.53
	I0625 15:56:43.436083   36162 main.go:141] libmachine: (ha-674765-m02) Reserving static IP address...
	I0625 15:56:43.436092   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has current primary IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:43.436463   36162 main.go:141] libmachine: (ha-674765-m02) DBG | unable to find host DHCP lease matching {name: "ha-674765-m02", mac: "52:54:00:10:f4:2d", ip: "192.168.39.53"} in network mk-ha-674765
	I0625 15:56:43.506554   36162 main.go:141] libmachine: (ha-674765-m02) DBG | Getting to WaitForSSH function...
	I0625 15:56:43.506583   36162 main.go:141] libmachine: (ha-674765-m02) Reserved static IP address: 192.168.39.53
	I0625 15:56:43.506596   36162 main.go:141] libmachine: (ha-674765-m02) Waiting for SSH to be available...
	I0625 15:56:43.509263   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:43.509624   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:minikube Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:43.509649   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:43.509853   36162 main.go:141] libmachine: (ha-674765-m02) DBG | Using SSH client type: external
	I0625 15:56:43.509877   36162 main.go:141] libmachine: (ha-674765-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/id_rsa (-rw-------)
	I0625 15:56:43.509906   36162 main.go:141] libmachine: (ha-674765-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0625 15:56:43.509917   36162 main.go:141] libmachine: (ha-674765-m02) DBG | About to run SSH command:
	I0625 15:56:43.509974   36162 main.go:141] libmachine: (ha-674765-m02) DBG | exit 0
	I0625 15:56:43.638837   36162 main.go:141] libmachine: (ha-674765-m02) DBG | SSH cmd err, output: <nil>: 
	I0625 15:56:43.639138   36162 main.go:141] libmachine: (ha-674765-m02) KVM machine creation complete!
	I0625 15:56:43.639371   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetConfigRaw
	I0625 15:56:43.639968   36162 main.go:141] libmachine: (ha-674765-m02) Calling .DriverName
	I0625 15:56:43.640166   36162 main.go:141] libmachine: (ha-674765-m02) Calling .DriverName
	I0625 15:56:43.640311   36162 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0625 15:56:43.640328   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetState
	I0625 15:56:43.641693   36162 main.go:141] libmachine: Detecting operating system of created instance...
	I0625 15:56:43.641709   36162 main.go:141] libmachine: Waiting for SSH to be available...
	I0625 15:56:43.641716   36162 main.go:141] libmachine: Getting to WaitForSSH function...
	I0625 15:56:43.641724   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 15:56:43.644119   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:43.644499   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:43.644516   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:43.644712   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 15:56:43.644908   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:43.645089   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:43.645204   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 15:56:43.645340   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:56:43.645606   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0625 15:56:43.645625   36162 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0625 15:56:43.757512   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0625 15:56:43.757541   36162 main.go:141] libmachine: Detecting the provisioner...
	I0625 15:56:43.757551   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 15:56:43.760543   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:43.760942   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:43.760965   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:43.761120   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 15:56:43.761298   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:43.761432   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:43.761540   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 15:56:43.761659   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:56:43.761861   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0625 15:56:43.761874   36162 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0625 15:56:43.879100   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0625 15:56:43.879182   36162 main.go:141] libmachine: found compatible host: buildroot
	I0625 15:56:43.879191   36162 main.go:141] libmachine: Provisioning with buildroot...
	I0625 15:56:43.879198   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetMachineName
	I0625 15:56:43.879420   36162 buildroot.go:166] provisioning hostname "ha-674765-m02"
	I0625 15:56:43.879450   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetMachineName
	I0625 15:56:43.879603   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 15:56:43.882190   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:43.882586   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:43.882613   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:43.882793   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 15:56:43.882966   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:43.883121   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:43.883220   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 15:56:43.883387   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:56:43.883588   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0625 15:56:43.883606   36162 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-674765-m02 && echo "ha-674765-m02" | sudo tee /etc/hostname
	I0625 15:56:44.008949   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-674765-m02
	
	I0625 15:56:44.008972   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 15:56:44.011836   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.012247   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:44.012278   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.012472   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 15:56:44.012645   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:44.012804   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:44.012914   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 15:56:44.013049   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:56:44.013219   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0625 15:56:44.013242   36162 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-674765-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-674765-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-674765-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0625 15:56:44.131935   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0625 15:56:44.131962   36162 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19128-13846/.minikube CaCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19128-13846/.minikube}
	I0625 15:56:44.131976   36162 buildroot.go:174] setting up certificates
	I0625 15:56:44.131985   36162 provision.go:84] configureAuth start
	I0625 15:56:44.131996   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetMachineName
	I0625 15:56:44.132256   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetIP
	I0625 15:56:44.135231   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.135590   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:44.135634   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.135776   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 15:56:44.138252   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.138732   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:44.138757   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.138892   36162 provision.go:143] copyHostCerts
	I0625 15:56:44.138922   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem
	I0625 15:56:44.138950   36162 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem, removing ...
	I0625 15:56:44.138959   36162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem
	I0625 15:56:44.139024   36162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem (1078 bytes)
	I0625 15:56:44.139107   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem
	I0625 15:56:44.139141   36162 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem, removing ...
	I0625 15:56:44.139151   36162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem
	I0625 15:56:44.139194   36162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem (1123 bytes)
	I0625 15:56:44.139270   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem
	I0625 15:56:44.139295   36162 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem, removing ...
	I0625 15:56:44.139299   36162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem
	I0625 15:56:44.139328   36162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem (1679 bytes)
	I0625 15:56:44.139382   36162 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem org=jenkins.ha-674765-m02 san=[127.0.0.1 192.168.39.53 ha-674765-m02 localhost minikube]
	I0625 15:56:44.264356   36162 provision.go:177] copyRemoteCerts
	I0625 15:56:44.264406   36162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0625 15:56:44.264426   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 15:56:44.267152   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.267510   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:44.267531   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.267689   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 15:56:44.267905   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:44.268074   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 15:56:44.268226   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/id_rsa Username:docker}
	I0625 15:56:44.356736   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0625 15:56:44.356805   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0625 15:56:44.383296   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0625 15:56:44.383365   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0625 15:56:44.408362   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0625 15:56:44.408436   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0625 15:56:44.436178   36162 provision.go:87] duration metric: took 304.180992ms to configureAuth
	I0625 15:56:44.436205   36162 buildroot.go:189] setting minikube options for container-runtime
	I0625 15:56:44.436414   36162 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 15:56:44.436506   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 15:56:44.439256   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.439568   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:44.439588   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.439775   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 15:56:44.439952   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:44.440094   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:44.440218   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 15:56:44.440327   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:56:44.440477   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0625 15:56:44.440491   36162 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0625 15:56:44.705173   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0625 15:56:44.705203   36162 main.go:141] libmachine: Checking connection to Docker...
	I0625 15:56:44.705214   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetURL
	I0625 15:56:44.706585   36162 main.go:141] libmachine: (ha-674765-m02) DBG | Using libvirt version 6000000
	I0625 15:56:44.709060   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.709569   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:44.709596   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.709795   36162 main.go:141] libmachine: Docker is up and running!
	I0625 15:56:44.709819   36162 main.go:141] libmachine: Reticulating splines...
	I0625 15:56:44.709828   36162 client.go:171] duration metric: took 25.485406116s to LocalClient.Create
	I0625 15:56:44.709853   36162 start.go:167] duration metric: took 25.485464391s to libmachine.API.Create "ha-674765"
	I0625 15:56:44.709865   36162 start.go:293] postStartSetup for "ha-674765-m02" (driver="kvm2")
	I0625 15:56:44.709879   36162 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0625 15:56:44.709902   36162 main.go:141] libmachine: (ha-674765-m02) Calling .DriverName
	I0625 15:56:44.710129   36162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0625 15:56:44.710156   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 15:56:44.712436   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.712772   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:44.712797   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.712982   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 15:56:44.713161   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:44.713312   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 15:56:44.713458   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/id_rsa Username:docker}
	I0625 15:56:44.801536   36162 ssh_runner.go:195] Run: cat /etc/os-release
	I0625 15:56:44.805686   36162 info.go:137] Remote host: Buildroot 2023.02.9
	I0625 15:56:44.805710   36162 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/addons for local assets ...
	I0625 15:56:44.805779   36162 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/files for local assets ...
	I0625 15:56:44.805859   36162 filesync.go:149] local asset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> 212392.pem in /etc/ssl/certs
	I0625 15:56:44.805869   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> /etc/ssl/certs/212392.pem
	I0625 15:56:44.805944   36162 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0625 15:56:44.815391   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem --> /etc/ssl/certs/212392.pem (1708 bytes)
	I0625 15:56:44.838164   36162 start.go:296] duration metric: took 128.283548ms for postStartSetup
	I0625 15:56:44.838208   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetConfigRaw
	I0625 15:56:44.838767   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetIP
	I0625 15:56:44.841210   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.841590   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:44.841617   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.841846   36162 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/config.json ...
	I0625 15:56:44.842066   36162 start.go:128] duration metric: took 25.634681289s to createHost
	I0625 15:56:44.842088   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 15:56:44.844130   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.844486   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:44.844513   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.844694   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 15:56:44.844859   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:44.845009   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:44.845126   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 15:56:44.845307   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:56:44.845471   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0625 15:56:44.845488   36162 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0625 15:56:44.959485   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719331004.936443927
	
	I0625 15:56:44.959508   36162 fix.go:216] guest clock: 1719331004.936443927
	I0625 15:56:44.959518   36162 fix.go:229] Guest: 2024-06-25 15:56:44.936443927 +0000 UTC Remote: 2024-06-25 15:56:44.842078261 +0000 UTC m=+80.209901183 (delta=94.365666ms)
	I0625 15:56:44.959542   36162 fix.go:200] guest clock delta is within tolerance: 94.365666ms
	I0625 15:56:44.959549   36162 start.go:83] releasing machines lock for "ha-674765-m02", held for 25.752244408s
	I0625 15:56:44.959580   36162 main.go:141] libmachine: (ha-674765-m02) Calling .DriverName
	I0625 15:56:44.959844   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetIP
	I0625 15:56:44.962408   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.962838   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:44.962870   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.965323   36162 out.go:177] * Found network options:
	I0625 15:56:44.966887   36162 out.go:177]   - NO_PROXY=192.168.39.128
	W0625 15:56:44.968395   36162 proxy.go:119] fail to check proxy env: Error ip not in block
	I0625 15:56:44.968435   36162 main.go:141] libmachine: (ha-674765-m02) Calling .DriverName
	I0625 15:56:44.968940   36162 main.go:141] libmachine: (ha-674765-m02) Calling .DriverName
	I0625 15:56:44.969145   36162 main.go:141] libmachine: (ha-674765-m02) Calling .DriverName
	I0625 15:56:44.969199   36162 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0625 15:56:44.969240   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	W0625 15:56:44.969308   36162 proxy.go:119] fail to check proxy env: Error ip not in block
	I0625 15:56:44.969384   36162 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0625 15:56:44.969403   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 15:56:44.972259   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.972467   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.972653   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:44.972678   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.972795   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 15:56:44.972933   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:44.972962   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:44.972979   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:44.973098   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 15:56:44.973145   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 15:56:44.973228   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 15:56:44.973319   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/id_rsa Username:docker}
	I0625 15:56:44.973469   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 15:56:44.973610   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/id_rsa Username:docker}
	I0625 15:56:45.205451   36162 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0625 15:56:45.211847   36162 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0625 15:56:45.211914   36162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0625 15:56:45.229533   36162 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0625 15:56:45.229564   36162 start.go:494] detecting cgroup driver to use...
	I0625 15:56:45.229628   36162 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0625 15:56:45.247009   36162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0625 15:56:45.260421   36162 docker.go:217] disabling cri-docker service (if available) ...
	I0625 15:56:45.260480   36162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0625 15:56:45.273876   36162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0625 15:56:45.286958   36162 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0625 15:56:45.403810   36162 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0625 15:56:45.550586   36162 docker.go:233] disabling docker service ...
	I0625 15:56:45.550655   36162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0625 15:56:45.564489   36162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0625 15:56:45.576838   36162 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0625 15:56:45.708091   36162 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0625 15:56:45.846107   36162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0625 15:56:45.860205   36162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0625 15:56:45.879876   36162 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0625 15:56:45.879925   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:56:45.891391   36162 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0625 15:56:45.891465   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:56:45.902882   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:56:45.914347   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:56:45.926912   36162 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0625 15:56:45.939261   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:56:45.951330   36162 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:56:45.970241   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:56:45.982394   36162 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0625 15:56:45.993515   36162 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0625 15:56:45.993554   36162 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0625 15:56:46.009455   36162 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0625 15:56:46.021074   36162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 15:56:46.147216   36162 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0625 15:56:46.283042   36162 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0625 15:56:46.283099   36162 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0625 15:56:46.288405   36162 start.go:562] Will wait 60s for crictl version
	I0625 15:56:46.288459   36162 ssh_runner.go:195] Run: which crictl
	I0625 15:56:46.292293   36162 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0625 15:56:46.339974   36162 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0625 15:56:46.340069   36162 ssh_runner.go:195] Run: crio --version
	I0625 15:56:46.374253   36162 ssh_runner.go:195] Run: crio --version
	I0625 15:56:46.403924   36162 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0625 15:56:46.405251   36162 out.go:177]   - env NO_PROXY=192.168.39.128
	I0625 15:56:46.406413   36162 main.go:141] libmachine: (ha-674765-m02) Calling .GetIP
	I0625 15:56:46.409391   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:46.409787   36162 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:56:34 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 15:56:46.409814   36162 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 15:56:46.410095   36162 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0625 15:56:46.415414   36162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0625 15:56:46.428410   36162 mustload.go:65] Loading cluster: ha-674765
	I0625 15:56:46.428590   36162 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 15:56:46.428858   36162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:56:46.428886   36162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:56:46.443673   36162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36117
	I0625 15:56:46.444052   36162 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:56:46.444465   36162 main.go:141] libmachine: Using API Version  1
	I0625 15:56:46.444480   36162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:56:46.444814   36162 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:56:46.444987   36162 main.go:141] libmachine: (ha-674765) Calling .GetState
	I0625 15:56:46.446627   36162 host.go:66] Checking if "ha-674765" exists ...
	I0625 15:56:46.446893   36162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:56:46.446914   36162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:56:46.460420   36162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44941
	I0625 15:56:46.460784   36162 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:56:46.461162   36162 main.go:141] libmachine: Using API Version  1
	I0625 15:56:46.461184   36162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:56:46.461438   36162 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:56:46.461643   36162 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 15:56:46.461809   36162 certs.go:68] Setting up /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765 for IP: 192.168.39.53
	I0625 15:56:46.461821   36162 certs.go:194] generating shared ca certs ...
	I0625 15:56:46.461841   36162 certs.go:226] acquiring lock for ca certs: {Name:mkac904b769881cd26c50f043dc80ff92937f71f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:56:46.461965   36162 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key
	I0625 15:56:46.462017   36162 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key
	I0625 15:56:46.462042   36162 certs.go:256] generating profile certs ...
	I0625 15:56:46.462130   36162 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.key
	I0625 15:56:46.462158   36162 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.3cf33f8e
	I0625 15:56:46.462178   36162 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.3cf33f8e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.128 192.168.39.53 192.168.39.254]
	I0625 15:56:46.776861   36162 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.3cf33f8e ...
	I0625 15:56:46.776891   36162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.3cf33f8e: {Name:mk63bfac5d652837104707bb3a98a9a6114ad62b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:56:46.777070   36162 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.3cf33f8e ...
	I0625 15:56:46.777089   36162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.3cf33f8e: {Name:mk0954e4ee17ed2229bef891eb165210e12ccf5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:56:46.777190   36162 certs.go:381] copying /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.3cf33f8e -> /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt
	I0625 15:56:46.777337   36162 certs.go:385] copying /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.3cf33f8e -> /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key
	I0625 15:56:46.777499   36162 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.key
	I0625 15:56:46.777516   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0625 15:56:46.777533   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0625 15:56:46.777550   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0625 15:56:46.777570   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0625 15:56:46.777589   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0625 15:56:46.777607   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0625 15:56:46.777625   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0625 15:56:46.777643   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0625 15:56:46.777701   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem (1338 bytes)
	W0625 15:56:46.777738   36162 certs.go:480] ignoring /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239_empty.pem, impossibly tiny 0 bytes
	I0625 15:56:46.777751   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem (1679 bytes)
	I0625 15:56:46.777789   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem (1078 bytes)
	I0625 15:56:46.777820   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem (1123 bytes)
	I0625 15:56:46.777852   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem (1679 bytes)
	I0625 15:56:46.777908   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem (1708 bytes)
	I0625 15:56:46.777945   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> /usr/share/ca-certificates/212392.pem
	I0625 15:56:46.777965   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0625 15:56:46.777983   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem -> /usr/share/ca-certificates/21239.pem
	I0625 15:56:46.778020   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:56:46.780624   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:56:46.780925   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:56:46.780948   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:56:46.781144   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:56:46.781339   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:56:46.781501   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:56:46.781649   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 15:56:46.858845   36162 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0625 15:56:46.864476   36162 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0625 15:56:46.875910   36162 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0625 15:56:46.880109   36162 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0625 15:56:46.890533   36162 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0625 15:56:46.894910   36162 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0625 15:56:46.905170   36162 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0625 15:56:46.909338   36162 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0625 15:56:46.920068   36162 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0625 15:56:46.924246   36162 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0625 15:56:46.934395   36162 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0625 15:56:46.938224   36162 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0625 15:56:46.948370   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0625 15:56:46.976834   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0625 15:56:47.009142   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0625 15:56:47.033961   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0625 15:56:47.058231   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0625 15:56:47.082360   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0625 15:56:47.106992   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0625 15:56:47.130587   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0625 15:56:47.153854   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem --> /usr/share/ca-certificates/212392.pem (1708 bytes)
	I0625 15:56:47.176770   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0625 15:56:47.199826   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem --> /usr/share/ca-certificates/21239.pem (1338 bytes)
	I0625 15:56:47.223519   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0625 15:56:47.240420   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0625 15:56:47.257079   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0625 15:56:47.273371   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0625 15:56:47.289547   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0625 15:56:47.305756   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0625 15:56:47.322911   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0625 15:56:47.339666   36162 ssh_runner.go:195] Run: openssl version
	I0625 15:56:47.345606   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/212392.pem && ln -fs /usr/share/ca-certificates/212392.pem /etc/ssl/certs/212392.pem"
	I0625 15:56:47.357087   36162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/212392.pem
	I0625 15:56:47.362012   36162 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 25 15:51 /usr/share/ca-certificates/212392.pem
	I0625 15:56:47.362083   36162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/212392.pem
	I0625 15:56:47.368592   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/212392.pem /etc/ssl/certs/3ec20f2e.0"
	I0625 15:56:47.379518   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0625 15:56:47.390127   36162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0625 15:56:47.394519   36162 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 25 15:10 /usr/share/ca-certificates/minikubeCA.pem
	I0625 15:56:47.394563   36162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0625 15:56:47.400180   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0625 15:56:47.410872   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21239.pem && ln -fs /usr/share/ca-certificates/21239.pem /etc/ssl/certs/21239.pem"
	I0625 15:56:47.421558   36162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21239.pem
	I0625 15:56:47.425788   36162 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 25 15:51 /usr/share/ca-certificates/21239.pem
	I0625 15:56:47.425837   36162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21239.pem
	I0625 15:56:47.431468   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21239.pem /etc/ssl/certs/51391683.0"
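The block above installs the test CA certificates under /usr/share/ca-certificates and creates the OpenSSL-style hash symlinks (for example /etc/ssl/certs/3ec20f2e.0) that later verification expects; the symlink name is the subject-name hash printed by `openssl x509 -hash -noout -in <cert>`. Below is a minimal Go sketch for inspecting one of these PEM files; the file path is a placeholder, and the hash value itself is left to the openssl CLI exactly as in the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// Reads a PEM-encoded certificate (path given as the first argument) and
// prints its subject. The <hash>.0 symlink name used in the log is the
// subject-name hash computed by `openssl x509 -hash`.
func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: inspectcert <path-to-pem>")
		os.Exit(1)
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("subject:", cert.Subject.String())
}

Run it against e.g. /usr/share/ca-certificates/minikubeCA.pem to see which subject the b5213941.0 link in the log corresponds to.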
	I0625 15:56:47.441799   36162 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0625 15:56:47.445765   36162 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0625 15:56:47.445832   36162 kubeadm.go:928] updating node {m02 192.168.39.53 8443 v1.30.2 crio true true} ...
	I0625 15:56:47.445939   36162 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-674765-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-674765 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0625 15:56:47.445976   36162 kube-vip.go:115] generating kube-vip config ...
	I0625 15:56:47.446018   36162 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0625 15:56:47.463886   36162 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0625 15:56:47.463955   36162 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
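The manifest above is the static pod definition that minikube writes to /etc/kubernetes/manifests/kube-vip.yaml (written a few lines further down) so that the HA virtual IP 192.168.39.254 is announced via ARP and API traffic on port 8443 is load-balanced across the control-plane nodes. The following is a minimal, hypothetical Go sketch of rendering such a manifest from a text/template; it is not minikube's kube-vip.go template and keeps only a handful of the fields shown above.

package main

import (
	"os"
	"text/template"
)

// Trimmed-down, illustrative template in the spirit of the manifest above;
// the real template (kube-vip.go, as logged) carries many more env fields.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	// Values taken from the log: image v0.8.0, VIP 192.168.39.254, port 8443.
	if err := t.Execute(os.Stdout, map[string]string{
		"Image": "ghcr.io/kube-vip/kube-vip:v0.8.0",
		"VIP":   "192.168.39.254",
		"Port":  "8443",
	}); err != nil {
		panic(err)
	}
}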
	I0625 15:56:47.464005   36162 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0625 15:56:47.473844   36162 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0625 15:56:47.473931   36162 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0625 15:56:47.483476   36162 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0625 15:56:47.483503   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0625 15:56:47.483567   36162 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.30.2/kubelet
	I0625 15:56:47.483596   36162 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.30.2/kubeadm
	I0625 15:56:47.483574   36162 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0625 15:56:47.488437   36162 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0625 15:56:47.488473   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0625 15:56:48.371648   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0625 15:56:48.371718   36162 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0625 15:56:48.377117   36162 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0625 15:56:48.377149   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0625 15:56:49.145989   36162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 15:56:49.161457   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0625 15:56:49.161542   36162 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0625 15:56:49.165852   36162 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0625 15:56:49.165886   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
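The three transfers above come from cached binaries that were fetched from dl.k8s.io with a `checksum=file:<url>.sha256` query, i.e. each download is verified against the digest published next to it. A standalone Go sketch of that verification step is shown below; it illustrates the pattern, is not minikube's download.go, and buffers the whole binary in memory for simplicity.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url and returns the response body.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	const binURL = "https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl"

	bin, err := fetch(binURL)
	if err != nil {
		panic(err)
	}
	sumFile, err := fetch(binURL + ".sha256")
	if err != nil {
		panic(err)
	}
	// The .sha256 file carries the hex digest as its first token.
	want := strings.Fields(string(sumFile))[0]
	sum := sha256.Sum256(bin)
	got := hex.EncodeToString(sum[:])
	if got != want {
		fmt.Fprintf(os.Stderr, "checksum mismatch: got %s want %s\n", got, want)
		os.Exit(1)
	}
	fmt.Println("kubectl checksum verified:", got)
}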
	I0625 15:56:49.573619   36162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0625 15:56:49.583407   36162 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0625 15:56:49.601195   36162 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0625 15:56:49.618903   36162 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0625 15:56:49.637792   36162 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0625 15:56:49.641936   36162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0625 15:56:49.656165   36162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 15:56:49.785739   36162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0625 15:56:49.803909   36162 host.go:66] Checking if "ha-674765" exists ...
	I0625 15:56:49.804349   36162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:56:49.804398   36162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:56:49.818976   36162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36479
	I0625 15:56:49.819444   36162 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:56:49.819947   36162 main.go:141] libmachine: Using API Version  1
	I0625 15:56:49.819971   36162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:56:49.820338   36162 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:56:49.820532   36162 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 15:56:49.820733   36162 start.go:316] joinCluster: &{Name:ha-674765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-674765 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 15:56:49.820832   36162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0625 15:56:49.820846   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:56:49.823597   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:56:49.823989   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:56:49.824021   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:56:49.824195   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:56:49.824369   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:56:49.824528   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:56:49.824653   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 15:56:49.988030   36162 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0625 15:56:49.988087   36162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rsvyh1.iisaun8ql3zel5y7 --discovery-token-ca-cert-hash sha256:df4523a4334c80aff4a7c2fc7b4a73691744a675a28cdb3d4468287f693ab03d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-674765-m02 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443"
	I0625 15:57:11.986265   36162 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rsvyh1.iisaun8ql3zel5y7 --discovery-token-ca-cert-hash sha256:df4523a4334c80aff4a7c2fc7b4a73691744a675a28cdb3d4468287f693ab03d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-674765-m02 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443": (21.998151766s)
	I0625 15:57:11.986295   36162 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0625 15:57:12.562932   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-674765-m02 minikube.k8s.io/updated_at=2024_06_25T15_57_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b minikube.k8s.io/name=ha-674765 minikube.k8s.io/primary=false
	I0625 15:57:12.672103   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-674765-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0625 15:57:12.767521   36162 start.go:318] duration metric: took 22.946781224s to joinCluster
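The join above is assembled in two steps: `kubeadm token create --print-join-command --ttl=0` is run on the existing control plane, and the printed command is then executed on m02 with the control-plane flags seen in the log (--control-plane, --apiserver-advertise-address, --apiserver-bind-port, --cri-socket) appended. A minimal sketch of those two steps with os/exec follows; minikube actually drives both commands over SSH via ssh_runner, and this sketch only prints the final command instead of running it.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: on an existing control-plane node (kubeadm must be on PATH),
	// print a join command with a non-expiring token, as in the log above.
	out, err := exec.Command("kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	join := strings.TrimSpace(string(out))

	// Step 2: the command that would be run on the joining node, extended
	// with the control-plane flags from the log (values are illustrative).
	full := join + " --control-plane" +
		" --apiserver-advertise-address=192.168.39.53" +
		" --apiserver-bind-port=8443" +
		" --cri-socket unix:///var/run/crio/crio.sock"
	fmt.Println(full)
}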
	I0625 15:57:12.767613   36162 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0625 15:57:12.767916   36162 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 15:57:12.768897   36162 out.go:177] * Verifying Kubernetes components...
	I0625 15:57:12.770051   36162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 15:57:13.004125   36162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0625 15:57:13.032881   36162 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19128-13846/kubeconfig
	I0625 15:57:13.033081   36162 kapi.go:59] client config for ha-674765: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.crt", KeyFile:"/home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.key", CAFile:"/home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfa3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0625 15:57:13.033137   36162 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.128:8443
	I0625 15:57:13.033307   36162 node_ready.go:35] waiting up to 6m0s for node "ha-674765-m02" to be "Ready" ...
	I0625 15:57:13.033373   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:13.033381   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:13.033388   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:13.033392   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:13.043431   36162 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0625 15:57:13.534410   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:13.534428   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:13.534438   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:13.534441   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:13.538182   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:14.034306   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:14.034326   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:14.034338   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:14.034345   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:14.039446   36162 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0625 15:57:14.533963   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:14.533985   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:14.533992   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:14.533997   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:14.537144   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:15.034450   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:15.034483   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:15.034491   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:15.034494   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:15.037652   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:15.038110   36162 node_ready.go:53] node "ha-674765-m02" has status "Ready":"False"
	I0625 15:57:15.534176   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:15.534194   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:15.534202   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:15.534206   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:15.537432   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:16.034503   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:16.034523   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:16.034531   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:16.034535   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:16.040112   36162 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0625 15:57:16.534069   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:16.534090   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:16.534098   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:16.534102   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:16.537497   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:17.034500   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:17.034522   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:17.034531   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:17.034536   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:17.037757   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:17.038665   36162 node_ready.go:53] node "ha-674765-m02" has status "Ready":"False"
	I0625 15:57:17.533937   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:17.533966   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:17.533978   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:17.533990   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:17.536681   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:18.033555   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:18.033576   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:18.033584   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:18.033588   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:18.037070   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:18.534407   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:18.534427   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:18.534435   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:18.534439   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:18.537330   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:19.033518   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:19.033540   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:19.033550   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:19.033556   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:19.036885   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:19.534060   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:19.534083   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:19.534091   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:19.534094   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:19.537345   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:19.537969   36162 node_ready.go:53] node "ha-674765-m02" has status "Ready":"False"
	I0625 15:57:20.034304   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:20.034323   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:20.034333   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:20.034339   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:20.037226   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:20.534256   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:20.534274   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:20.534282   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:20.534286   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:20.537337   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:20.537996   36162 node_ready.go:49] node "ha-674765-m02" has status "Ready":"True"
	I0625 15:57:20.538014   36162 node_ready.go:38] duration metric: took 7.50469233s for node "ha-674765-m02" to be "Ready" ...
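The repeated GETs above are the node_ready wait loop: the node object is fetched roughly every 500ms until its Ready condition reports True, which here took about 7.5s of the 6m0s budget. A minimal client-go sketch of an equivalent wait is shown below; it assumes a kubeconfig at the default location and a client-go/apimachinery version providing wait.PollUntilContextTimeout, and it is not minikube's node_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll the node every 500ms, for up to 6 minutes, until Ready=True --
	// the same cadence and timeout reported in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "ha-674765-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node ha-674765-m02 is Ready")
}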
	I0625 15:57:20.538024   36162 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0625 15:57:20.538088   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0625 15:57:20.538099   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:20.538109   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:20.538116   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:20.542271   36162 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0625 15:57:20.548231   36162 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-28db5" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:20.548316   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-28db5
	I0625 15:57:20.548326   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:20.548336   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:20.548343   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:20.550570   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:20.551195   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:57:20.551209   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:20.551216   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:20.551221   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:20.553381   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:20.554110   36162 pod_ready.go:92] pod "coredns-7db6d8ff4d-28db5" in "kube-system" namespace has status "Ready":"True"
	I0625 15:57:20.554130   36162 pod_ready.go:81] duration metric: took 5.877818ms for pod "coredns-7db6d8ff4d-28db5" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:20.554142   36162 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-84zkt" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:20.554198   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-84zkt
	I0625 15:57:20.554209   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:20.554219   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:20.554226   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:20.556348   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:20.557071   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:57:20.557084   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:20.557091   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:20.557096   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:20.559058   36162 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0625 15:57:20.559525   36162 pod_ready.go:92] pod "coredns-7db6d8ff4d-84zkt" in "kube-system" namespace has status "Ready":"True"
	I0625 15:57:20.559538   36162 pod_ready.go:81] duration metric: took 5.389642ms for pod "coredns-7db6d8ff4d-84zkt" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:20.559546   36162 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:20.559581   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765
	I0625 15:57:20.559589   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:20.559595   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:20.559599   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:20.561747   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:20.562190   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:57:20.562201   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:20.562207   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:20.562211   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:20.564120   36162 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0625 15:57:20.564704   36162 pod_ready.go:92] pod "etcd-ha-674765" in "kube-system" namespace has status "Ready":"True"
	I0625 15:57:20.564720   36162 pod_ready.go:81] duration metric: took 5.168595ms for pod "etcd-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:20.564729   36162 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:20.564781   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m02
	I0625 15:57:20.564791   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:20.564801   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:20.564808   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:20.567173   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:20.567735   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:20.567747   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:20.567762   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:20.567769   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:20.570009   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:21.064954   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m02
	I0625 15:57:21.064981   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:21.064992   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:21.064998   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:21.068724   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:21.069264   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:21.069279   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:21.069286   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:21.069292   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:21.071145   36162 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0625 15:57:21.565723   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m02
	I0625 15:57:21.565741   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:21.565749   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:21.565753   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:21.568580   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:21.569194   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:21.569209   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:21.569217   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:21.569222   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:21.571774   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:22.065633   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m02
	I0625 15:57:22.065654   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:22.065662   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:22.065666   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:22.068975   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:22.069634   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:22.069650   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:22.069659   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:22.069665   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:22.072405   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:22.565625   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m02
	I0625 15:57:22.565647   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:22.565657   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:22.565662   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:22.568873   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:22.569409   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:22.569422   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:22.569431   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:22.569436   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:22.571772   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:22.572258   36162 pod_ready.go:102] pod "etcd-ha-674765-m02" in "kube-system" namespace has status "Ready":"False"
	I0625 15:57:23.065702   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m02
	I0625 15:57:23.065723   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:23.065731   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:23.065735   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:23.068905   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:23.069772   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:23.069789   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:23.069797   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:23.069802   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:23.072443   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:23.565587   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m02
	I0625 15:57:23.565606   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:23.565614   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:23.565619   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:23.568586   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:23.569632   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:23.569653   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:23.569663   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:23.569668   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:23.573538   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:24.064876   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m02
	I0625 15:57:24.064897   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:24.064905   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:24.064911   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:24.068269   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:24.069052   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:24.069065   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:24.069072   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:24.069076   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:24.071471   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:24.564911   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m02
	I0625 15:57:24.564935   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:24.564947   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:24.564953   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:24.568341   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:24.568952   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:24.568966   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:24.568974   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:24.568979   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:24.571934   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:24.572331   36162 pod_ready.go:102] pod "etcd-ha-674765-m02" in "kube-system" namespace has status "Ready":"False"
	I0625 15:57:25.065911   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m02
	I0625 15:57:25.065931   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:25.065939   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:25.065943   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:25.068874   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:25.069432   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:25.069447   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:25.069454   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:25.069458   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:25.072150   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:25.565017   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m02
	I0625 15:57:25.565035   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:25.565043   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:25.565046   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:25.568134   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:25.568746   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:25.568760   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:25.568767   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:25.568772   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:25.571138   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:26.064981   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m02
	I0625 15:57:26.065002   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:26.065012   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:26.065018   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:26.068072   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:26.068948   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:26.068964   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:26.068971   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:26.068974   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:26.071400   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:26.564852   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m02
	I0625 15:57:26.564873   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:26.564881   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:26.564886   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:26.568031   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:26.568891   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:26.568910   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:26.568917   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:26.568922   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:26.571362   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:26.571905   36162 pod_ready.go:92] pod "etcd-ha-674765-m02" in "kube-system" namespace has status "Ready":"True"
	I0625 15:57:26.571922   36162 pod_ready.go:81] duration metric: took 6.007184595s for pod "etcd-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:26.571940   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:26.571993   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-674765
	I0625 15:57:26.572003   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:26.572012   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:26.572021   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:26.574441   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:26.575212   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:57:26.575227   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:26.575233   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:26.575238   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:26.577293   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:26.577866   36162 pod_ready.go:92] pod "kube-apiserver-ha-674765" in "kube-system" namespace has status "Ready":"True"
	I0625 15:57:26.577884   36162 pod_ready.go:81] duration metric: took 5.936767ms for pod "kube-apiserver-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:26.577895   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:26.577956   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-674765-m02
	I0625 15:57:26.577964   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:26.577971   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:26.577979   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:26.580097   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:26.580708   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:26.580722   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:26.580729   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:26.580734   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:26.582765   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:27.078811   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-674765-m02
	I0625 15:57:27.078837   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:27.078848   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:27.078853   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:27.081973   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:27.082745   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:27.082759   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:27.082766   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:27.082772   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:27.085337   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:27.578151   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-674765-m02
	I0625 15:57:27.578171   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:27.578178   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:27.578182   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:27.581219   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:27.581951   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:27.581967   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:27.581974   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:27.581978   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:27.584824   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:28.078904   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-674765-m02
	I0625 15:57:28.078928   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:28.078938   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:28.078944   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:28.082005   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:28.082825   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:28.082842   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:28.082851   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:28.082858   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:28.085426   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:28.578694   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-674765-m02
	I0625 15:57:28.578716   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:28.578727   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:28.578733   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:28.581575   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:28.582541   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:28.582556   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:28.582566   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:28.582572   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:28.584998   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:28.585482   36162 pod_ready.go:102] pod "kube-apiserver-ha-674765-m02" in "kube-system" namespace has status "Ready":"False"
	I0625 15:57:29.078896   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-674765-m02
	I0625 15:57:29.078916   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:29.078924   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:29.078928   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:29.082136   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:29.083150   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:29.083173   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:29.083182   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:29.083187   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:29.085938   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:29.578152   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-674765-m02
	I0625 15:57:29.578172   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:29.578179   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:29.578182   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:29.580956   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:29.581742   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:29.581764   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:29.581775   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:29.581784   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:29.584418   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:30.078413   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-674765-m02
	I0625 15:57:30.078434   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:30.078444   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:30.078453   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:30.081862   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:30.082598   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:30.082616   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:30.082626   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:30.082643   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:30.085130   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:30.085634   36162 pod_ready.go:92] pod "kube-apiserver-ha-674765-m02" in "kube-system" namespace has status "Ready":"True"
	I0625 15:57:30.085653   36162 pod_ready.go:81] duration metric: took 3.507746266s for pod "kube-apiserver-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:30.085666   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:30.085718   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-674765
	I0625 15:57:30.085727   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:30.085737   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:30.085742   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:30.088893   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:30.090008   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:57:30.090023   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:30.090033   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:30.090039   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:30.092465   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:30.093045   36162 pod_ready.go:92] pod "kube-controller-manager-ha-674765" in "kube-system" namespace has status "Ready":"True"
	I0625 15:57:30.093068   36162 pod_ready.go:81] duration metric: took 7.394198ms for pod "kube-controller-manager-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:30.093078   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:30.093117   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-674765-m02
	I0625 15:57:30.093126   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:30.093132   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:30.093135   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:30.095802   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:30.096367   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:30.096379   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:30.096386   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:30.096390   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:30.098214   36162 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0625 15:57:30.098647   36162 pod_ready.go:92] pod "kube-controller-manager-ha-674765-m02" in "kube-system" namespace has status "Ready":"True"
	I0625 15:57:30.098661   36162 pod_ready.go:81] duration metric: took 5.577923ms for pod "kube-controller-manager-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:30.098668   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lsmft" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:30.098709   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lsmft
	I0625 15:57:30.098716   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:30.098723   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:30.098726   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:30.100989   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:30.134791   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:30.134806   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:30.134814   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:30.134820   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:30.137029   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:30.137573   36162 pod_ready.go:92] pod "kube-proxy-lsmft" in "kube-system" namespace has status "Ready":"True"
	I0625 15:57:30.137590   36162 pod_ready.go:81] duration metric: took 38.915586ms for pod "kube-proxy-lsmft" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:30.137600   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rh9n5" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:30.335009   36162 request.go:629] Waited for 197.354925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rh9n5
	I0625 15:57:30.335063   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rh9n5
	I0625 15:57:30.335070   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:30.335082   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:30.335090   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:30.338543   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:30.534537   36162 request.go:629] Waited for 195.314147ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:57:30.534621   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:57:30.534631   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:30.534643   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:30.534652   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:30.538384   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:30.539076   36162 pod_ready.go:92] pod "kube-proxy-rh9n5" in "kube-system" namespace has status "Ready":"True"
	I0625 15:57:30.539095   36162 pod_ready.go:81] duration metric: took 401.488432ms for pod "kube-proxy-rh9n5" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:30.539106   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:30.735247   36162 request.go:629] Waited for 196.079864ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-674765
	I0625 15:57:30.735325   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-674765
	I0625 15:57:30.735344   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:30.735369   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:30.735377   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:30.738144   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:30.934342   36162 request.go:629] Waited for 195.252677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:57:30.934435   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:57:30.934452   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:30.934459   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:30.934463   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:30.936872   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:30.937419   36162 pod_ready.go:92] pod "kube-scheduler-ha-674765" in "kube-system" namespace has status "Ready":"True"
	I0625 15:57:30.937438   36162 pod_ready.go:81] duration metric: took 398.324735ms for pod "kube-scheduler-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:30.937446   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:31.134503   36162 request.go:629] Waited for 196.991431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-674765-m02
	I0625 15:57:31.134579   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-674765-m02
	I0625 15:57:31.134587   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:31.134597   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:31.134604   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:31.137530   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:31.334415   36162 request.go:629] Waited for 196.279639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:31.334489   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:57:31.334514   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:31.334522   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:31.334527   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:31.337333   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:57:31.338097   36162 pod_ready.go:92] pod "kube-scheduler-ha-674765-m02" in "kube-system" namespace has status "Ready":"True"
	I0625 15:57:31.338118   36162 pod_ready.go:81] duration metric: took 400.664445ms for pod "kube-scheduler-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:57:31.338132   36162 pod_ready.go:38] duration metric: took 10.800092753s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
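The ~500 ms polling loop above is minikube's pod_ready helper cycling GET requests against each control-plane pod and its node until the pod reports Ready. A minimal client-go sketch of the same pattern; the kubeconfig path, interval, and timeout are illustrative assumptions, not minikube's actual values:

// Illustrative sketch only: wait for a pod's Ready condition the way the log's
// pod_ready loop does. Not minikube's implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path is an assumption for the example.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	name := "kube-apiserver-ha-674765-m02" // the pod polled in the log above
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			fmt.Printf("pod %q is Ready\n", name)
			return
		}
		select {
		case <-ctx.Done():
			fmt.Printf("timed out waiting for %q\n", name)
			return
		case <-time.After(500 * time.Millisecond): // roughly the cadence visible in the log
		}
	}
}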
	I0625 15:57:31.338152   36162 api_server.go:52] waiting for apiserver process to appear ...
	I0625 15:57:31.338198   36162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 15:57:31.354959   36162 api_server.go:72] duration metric: took 18.587310981s to wait for apiserver process to appear ...
	I0625 15:57:31.354974   36162 api_server.go:88] waiting for apiserver healthz status ...
	I0625 15:57:31.354989   36162 api_server.go:253] Checking apiserver healthz at https://192.168.39.128:8443/healthz ...
	I0625 15:57:31.360620   36162 api_server.go:279] https://192.168.39.128:8443/healthz returned 200:
	ok
	I0625 15:57:31.360687   36162 round_trippers.go:463] GET https://192.168.39.128:8443/version
	I0625 15:57:31.360700   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:31.360711   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:31.360722   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:31.361509   36162 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0625 15:57:31.361608   36162 api_server.go:141] control plane version: v1.30.2
	I0625 15:57:31.361626   36162 api_server.go:131] duration metric: took 6.646092ms to wait for apiserver health ...
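The health check above is a plain HTTPS GET against /healthz (expecting the body "ok") followed by /version. A bare-bones sketch of that probe; skipping certificate verification is only to keep the example short, and the real client authenticates with the CA and client certificates from the kubeconfig:

// Minimal sketch of the healthz probe pattern seen above. Endpoint reachability
// without credentials depends on the cluster's anonymous-auth settings.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.128:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // healthy apiserver returns: 200 ok
}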
	I0625 15:57:31.361635   36162 system_pods.go:43] waiting for kube-system pods to appear ...
	I0625 15:57:31.534552   36162 request.go:629] Waited for 172.857921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0625 15:57:31.534608   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0625 15:57:31.534613   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:31.534621   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:31.534624   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:31.540074   36162 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0625 15:57:31.544624   36162 system_pods.go:59] 17 kube-system pods found
	I0625 15:57:31.544648   36162 system_pods.go:61] "coredns-7db6d8ff4d-28db5" [1426e4a3-2f25-47e9-9b28-b23a81a3a19a] Running
	I0625 15:57:31.544653   36162 system_pods.go:61] "coredns-7db6d8ff4d-84zkt" [2f6426f8-a0c4-470c-b2b1-b62fa304c078] Running
	I0625 15:57:31.544658   36162 system_pods.go:61] "etcd-ha-674765" [a8f7d82c-8fc7-4190-99c2-0bedc24d8f4f] Running
	I0625 15:57:31.544661   36162 system_pods.go:61] "etcd-ha-674765-m02" [e3f94832-96fe-4bbf-8c53-86bab692b6a9] Running
	I0625 15:57:31.544664   36162 system_pods.go:61] "kindnet-kkgdq" [cfb408ee-0f73-4537-87fb-fad3d2b1f3f1] Running
	I0625 15:57:31.544667   36162 system_pods.go:61] "kindnet-ntq77" [37736a9f-5b4c-421c-9027-81e961ab8550] Running
	I0625 15:57:31.544670   36162 system_pods.go:61] "kube-apiserver-ha-674765" [594e5a19-d80b-4b26-8c91-a8475fb99630] Running
	I0625 15:57:31.544673   36162 system_pods.go:61] "kube-apiserver-ha-674765-m02" [e00ad102-e252-49e9-82e4-b466ae4eb7b2] Running
	I0625 15:57:31.544676   36162 system_pods.go:61] "kube-controller-manager-ha-674765" [5f4f1e7d-f796-4762-9f33-61755c0daef3] Running
	I0625 15:57:31.544679   36162 system_pods.go:61] "kube-controller-manager-ha-674765-m02" [acb4b5ca-b29e-4866-be68-eb4c6425463d] Running
	I0625 15:57:31.544682   36162 system_pods.go:61] "kube-proxy-lsmft" [fa5d210a-1295-497c-8a24-6a0f0dc941de] Running
	I0625 15:57:31.544684   36162 system_pods.go:61] "kube-proxy-rh9n5" [a0a24539-3168-42cc-93b3-d0b1e283d0bd] Running
	I0625 15:57:31.544687   36162 system_pods.go:61] "kube-scheduler-ha-674765" [2695280a-4dd5-4073-875e-63e5238bd1b7] Running
	I0625 15:57:31.544690   36162 system_pods.go:61] "kube-scheduler-ha-674765-m02" [dc04f489-1084-48d4-8cec-c79ec30e0987] Running
	I0625 15:57:31.544692   36162 system_pods.go:61] "kube-vip-ha-674765" [1d132475-65bb-43d1-9353-12b7be1f311f] Running
	I0625 15:57:31.544695   36162 system_pods.go:61] "kube-vip-ha-674765-m02" [dbde28c7-a109-4a7e-97bb-27576a94d2fe] Running
	I0625 15:57:31.544698   36162 system_pods.go:61] "storage-provisioner" [c227c5cf-2bd6-4ebf-9fdb-09d4229cf421] Running
	I0625 15:57:31.544704   36162 system_pods.go:74] duration metric: took 183.060621ms to wait for pod list to return data ...
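The repeated "Waited for ... due to client-side throttling" lines come from client-go's default token-bucket limiter (QPS 5, burst 10), which spaces requests roughly 200 ms apart once the burst is spent, matching the ~195 ms waits in the log. A sketch of loosening that limit on a rest.Config; the numbers are illustrative, not what minikube configures:

// Sketch: raise the client-side rate limit before building a clientset.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption for the example.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // client-go default is 5 requests/second
	cfg.Burst = 100 // client-go default is 10
	client := kubernetes.NewForConfigOrDie(cfg)

	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
}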
	I0625 15:57:31.544714   36162 default_sa.go:34] waiting for default service account to be created ...
	I0625 15:57:31.735105   36162 request.go:629] Waited for 190.327717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/default/serviceaccounts
	I0625 15:57:31.735155   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/default/serviceaccounts
	I0625 15:57:31.735160   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:31.735167   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:31.735170   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:31.738732   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:31.739002   36162 default_sa.go:45] found service account: "default"
	I0625 15:57:31.739025   36162 default_sa.go:55] duration metric: took 194.303559ms for default service account to be created ...
	I0625 15:57:31.739035   36162 system_pods.go:116] waiting for k8s-apps to be running ...
	I0625 15:57:31.934362   36162 request.go:629] Waited for 195.267283ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0625 15:57:31.934438   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0625 15:57:31.934444   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:31.934451   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:31.934459   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:31.939237   36162 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0625 15:57:31.943992   36162 system_pods.go:86] 17 kube-system pods found
	I0625 15:57:31.944014   36162 system_pods.go:89] "coredns-7db6d8ff4d-28db5" [1426e4a3-2f25-47e9-9b28-b23a81a3a19a] Running
	I0625 15:57:31.944020   36162 system_pods.go:89] "coredns-7db6d8ff4d-84zkt" [2f6426f8-a0c4-470c-b2b1-b62fa304c078] Running
	I0625 15:57:31.944024   36162 system_pods.go:89] "etcd-ha-674765" [a8f7d82c-8fc7-4190-99c2-0bedc24d8f4f] Running
	I0625 15:57:31.944028   36162 system_pods.go:89] "etcd-ha-674765-m02" [e3f94832-96fe-4bbf-8c53-86bab692b6a9] Running
	I0625 15:57:31.944031   36162 system_pods.go:89] "kindnet-kkgdq" [cfb408ee-0f73-4537-87fb-fad3d2b1f3f1] Running
	I0625 15:57:31.944035   36162 system_pods.go:89] "kindnet-ntq77" [37736a9f-5b4c-421c-9027-81e961ab8550] Running
	I0625 15:57:31.944044   36162 system_pods.go:89] "kube-apiserver-ha-674765" [594e5a19-d80b-4b26-8c91-a8475fb99630] Running
	I0625 15:57:31.944048   36162 system_pods.go:89] "kube-apiserver-ha-674765-m02" [e00ad102-e252-49e9-82e4-b466ae4eb7b2] Running
	I0625 15:57:31.944052   36162 system_pods.go:89] "kube-controller-manager-ha-674765" [5f4f1e7d-f796-4762-9f33-61755c0daef3] Running
	I0625 15:57:31.944056   36162 system_pods.go:89] "kube-controller-manager-ha-674765-m02" [acb4b5ca-b29e-4866-be68-eb4c6425463d] Running
	I0625 15:57:31.944061   36162 system_pods.go:89] "kube-proxy-lsmft" [fa5d210a-1295-497c-8a24-6a0f0dc941de] Running
	I0625 15:57:31.944065   36162 system_pods.go:89] "kube-proxy-rh9n5" [a0a24539-3168-42cc-93b3-d0b1e283d0bd] Running
	I0625 15:57:31.944068   36162 system_pods.go:89] "kube-scheduler-ha-674765" [2695280a-4dd5-4073-875e-63e5238bd1b7] Running
	I0625 15:57:31.944072   36162 system_pods.go:89] "kube-scheduler-ha-674765-m02" [dc04f489-1084-48d4-8cec-c79ec30e0987] Running
	I0625 15:57:31.944076   36162 system_pods.go:89] "kube-vip-ha-674765" [1d132475-65bb-43d1-9353-12b7be1f311f] Running
	I0625 15:57:31.944079   36162 system_pods.go:89] "kube-vip-ha-674765-m02" [dbde28c7-a109-4a7e-97bb-27576a94d2fe] Running
	I0625 15:57:31.944082   36162 system_pods.go:89] "storage-provisioner" [c227c5cf-2bd6-4ebf-9fdb-09d4229cf421] Running
	I0625 15:57:31.944088   36162 system_pods.go:126] duration metric: took 205.047376ms to wait for k8s-apps to be running ...
	I0625 15:57:31.944097   36162 system_svc.go:44] waiting for kubelet service to be running ....
	I0625 15:57:31.944138   36162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 15:57:31.960094   36162 system_svc.go:56] duration metric: took 15.988807ms WaitForService to wait for kubelet
	I0625 15:57:31.960116   36162 kubeadm.go:576] duration metric: took 19.192468967s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0625 15:57:31.960134   36162 node_conditions.go:102] verifying NodePressure condition ...
	I0625 15:57:32.134343   36162 request.go:629] Waited for 174.153112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes
	I0625 15:57:32.134416   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes
	I0625 15:57:32.134427   36162 round_trippers.go:469] Request Headers:
	I0625 15:57:32.134441   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:57:32.134450   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:57:32.137663   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:57:32.138464   36162 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0625 15:57:32.138508   36162 node_conditions.go:123] node cpu capacity is 2
	I0625 15:57:32.138519   36162 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0625 15:57:32.138523   36162 node_conditions.go:123] node cpu capacity is 2
	I0625 15:57:32.138527   36162 node_conditions.go:105] duration metric: took 178.388689ms to run NodePressure ...
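The NodePressure step reads each node's capacity; both nodes above report 2 CPUs and 17734596Ki of ephemeral storage. A small client-go sketch that lists nodes and prints those capacity fields (kubeconfig path assumed):

// Sketch: print per-node cpu and ephemeral-storage capacity, as verified above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}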
	I0625 15:57:32.138538   36162 start.go:240] waiting for startup goroutines ...
	I0625 15:57:32.138559   36162 start.go:254] writing updated cluster config ...
	I0625 15:57:32.140399   36162 out.go:177] 
	I0625 15:57:32.141783   36162 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 15:57:32.141866   36162 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/config.json ...
	I0625 15:57:32.143394   36162 out.go:177] * Starting "ha-674765-m03" control-plane node in "ha-674765" cluster
	I0625 15:57:32.144529   36162 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0625 15:57:32.144548   36162 cache.go:56] Caching tarball of preloaded images
	I0625 15:57:32.144629   36162 preload.go:173] Found /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0625 15:57:32.144639   36162 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0625 15:57:32.144725   36162 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/config.json ...
	I0625 15:57:32.144869   36162 start.go:360] acquireMachinesLock for ha-674765-m03: {Name:mk2a1ebee912b37a2b68bf2f76641f82f8fc2fcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0625 15:57:32.144904   36162 start.go:364] duration metric: took 20.207µs to acquireMachinesLock for "ha-674765-m03"
	I0625 15:57:32.144919   36162 start.go:93] Provisioning new machine with config: &{Name:ha-674765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-674765 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0625 15:57:32.145000   36162 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0625 15:57:32.146413   36162 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0625 15:57:32.146497   36162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:57:32.146527   36162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:57:32.161533   36162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37297
	I0625 15:57:32.161857   36162 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:57:32.162239   36162 main.go:141] libmachine: Using API Version  1
	I0625 15:57:32.162262   36162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:57:32.162557   36162 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:57:32.162765   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetMachineName
	I0625 15:57:32.162921   36162 main.go:141] libmachine: (ha-674765-m03) Calling .DriverName
	I0625 15:57:32.163059   36162 start.go:159] libmachine.API.Create for "ha-674765" (driver="kvm2")
	I0625 15:57:32.163087   36162 client.go:168] LocalClient.Create starting
	I0625 15:57:32.163121   36162 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem
	I0625 15:57:32.163157   36162 main.go:141] libmachine: Decoding PEM data...
	I0625 15:57:32.163185   36162 main.go:141] libmachine: Parsing certificate...
	I0625 15:57:32.163247   36162 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem
	I0625 15:57:32.163274   36162 main.go:141] libmachine: Decoding PEM data...
	I0625 15:57:32.163291   36162 main.go:141] libmachine: Parsing certificate...
	I0625 15:57:32.163324   36162 main.go:141] libmachine: Running pre-create checks...
	I0625 15:57:32.163336   36162 main.go:141] libmachine: (ha-674765-m03) Calling .PreCreateCheck
	I0625 15:57:32.163476   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetConfigRaw
	I0625 15:57:32.163843   36162 main.go:141] libmachine: Creating machine...
	I0625 15:57:32.163858   36162 main.go:141] libmachine: (ha-674765-m03) Calling .Create
	I0625 15:57:32.163976   36162 main.go:141] libmachine: (ha-674765-m03) Creating KVM machine...
	I0625 15:57:32.164992   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found existing default KVM network
	I0625 15:57:32.165138   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found existing private KVM network mk-ha-674765
	I0625 15:57:32.165262   36162 main.go:141] libmachine: (ha-674765-m03) Setting up store path in /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03 ...
	I0625 15:57:32.165284   36162 main.go:141] libmachine: (ha-674765-m03) Building disk image from file:///home/jenkins/minikube-integration/19128-13846/.minikube/cache/iso/amd64/minikube-v1.33.1-1719245461-19128-amd64.iso
	I0625 15:57:32.165317   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:32.165244   36953 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 15:57:32.165396   36162 main.go:141] libmachine: (ha-674765-m03) Downloading /home/jenkins/minikube-integration/19128-13846/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19128-13846/.minikube/cache/iso/amd64/minikube-v1.33.1-1719245461-19128-amd64.iso...
	I0625 15:57:32.386670   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:32.386569   36953 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/id_rsa...
	I0625 15:57:32.699159   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:32.699058   36953 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/ha-674765-m03.rawdisk...
	I0625 15:57:32.699189   36162 main.go:141] libmachine: (ha-674765-m03) DBG | Writing magic tar header
	I0625 15:57:32.699211   36162 main.go:141] libmachine: (ha-674765-m03) DBG | Writing SSH key tar header
	I0625 15:57:32.699223   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:32.699167   36953 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03 ...
	I0625 15:57:32.699269   36162 main.go:141] libmachine: (ha-674765-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03
	I0625 15:57:32.699289   36162 main.go:141] libmachine: (ha-674765-m03) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03 (perms=drwx------)
	I0625 15:57:32.699313   36162 main.go:141] libmachine: (ha-674765-m03) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube/machines (perms=drwxr-xr-x)
	I0625 15:57:32.699332   36162 main.go:141] libmachine: (ha-674765-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube/machines
	I0625 15:57:32.699344   36162 main.go:141] libmachine: (ha-674765-m03) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube (perms=drwxr-xr-x)
	I0625 15:57:32.699369   36162 main.go:141] libmachine: (ha-674765-m03) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846 (perms=drwxrwxr-x)
	I0625 15:57:32.699386   36162 main.go:141] libmachine: (ha-674765-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0625 15:57:32.699400   36162 main.go:141] libmachine: (ha-674765-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0625 15:57:32.699411   36162 main.go:141] libmachine: (ha-674765-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 15:57:32.699422   36162 main.go:141] libmachine: (ha-674765-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846
	I0625 15:57:32.699431   36162 main.go:141] libmachine: (ha-674765-m03) Creating domain...
	I0625 15:57:32.699463   36162 main.go:141] libmachine: (ha-674765-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0625 15:57:32.699487   36162 main.go:141] libmachine: (ha-674765-m03) DBG | Checking permissions on dir: /home/jenkins
	I0625 15:57:32.699498   36162 main.go:141] libmachine: (ha-674765-m03) DBG | Checking permissions on dir: /home
	I0625 15:57:32.699506   36162 main.go:141] libmachine: (ha-674765-m03) DBG | Skipping /home - not owner
	I0625 15:57:32.700382   36162 main.go:141] libmachine: (ha-674765-m03) define libvirt domain using xml: 
	I0625 15:57:32.700410   36162 main.go:141] libmachine: (ha-674765-m03) <domain type='kvm'>
	I0625 15:57:32.700420   36162 main.go:141] libmachine: (ha-674765-m03)   <name>ha-674765-m03</name>
	I0625 15:57:32.700428   36162 main.go:141] libmachine: (ha-674765-m03)   <memory unit='MiB'>2200</memory>
	I0625 15:57:32.700437   36162 main.go:141] libmachine: (ha-674765-m03)   <vcpu>2</vcpu>
	I0625 15:57:32.700443   36162 main.go:141] libmachine: (ha-674765-m03)   <features>
	I0625 15:57:32.700450   36162 main.go:141] libmachine: (ha-674765-m03)     <acpi/>
	I0625 15:57:32.700461   36162 main.go:141] libmachine: (ha-674765-m03)     <apic/>
	I0625 15:57:32.700472   36162 main.go:141] libmachine: (ha-674765-m03)     <pae/>
	I0625 15:57:32.700481   36162 main.go:141] libmachine: (ha-674765-m03)     
	I0625 15:57:32.700492   36162 main.go:141] libmachine: (ha-674765-m03)   </features>
	I0625 15:57:32.700509   36162 main.go:141] libmachine: (ha-674765-m03)   <cpu mode='host-passthrough'>
	I0625 15:57:32.700518   36162 main.go:141] libmachine: (ha-674765-m03)   
	I0625 15:57:32.700529   36162 main.go:141] libmachine: (ha-674765-m03)   </cpu>
	I0625 15:57:32.700548   36162 main.go:141] libmachine: (ha-674765-m03)   <os>
	I0625 15:57:32.700561   36162 main.go:141] libmachine: (ha-674765-m03)     <type>hvm</type>
	I0625 15:57:32.700571   36162 main.go:141] libmachine: (ha-674765-m03)     <boot dev='cdrom'/>
	I0625 15:57:32.700582   36162 main.go:141] libmachine: (ha-674765-m03)     <boot dev='hd'/>
	I0625 15:57:32.700590   36162 main.go:141] libmachine: (ha-674765-m03)     <bootmenu enable='no'/>
	I0625 15:57:32.700599   36162 main.go:141] libmachine: (ha-674765-m03)   </os>
	I0625 15:57:32.700608   36162 main.go:141] libmachine: (ha-674765-m03)   <devices>
	I0625 15:57:32.700618   36162 main.go:141] libmachine: (ha-674765-m03)     <disk type='file' device='cdrom'>
	I0625 15:57:32.700652   36162 main.go:141] libmachine: (ha-674765-m03)       <source file='/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/boot2docker.iso'/>
	I0625 15:57:32.700673   36162 main.go:141] libmachine: (ha-674765-m03)       <target dev='hdc' bus='scsi'/>
	I0625 15:57:32.700687   36162 main.go:141] libmachine: (ha-674765-m03)       <readonly/>
	I0625 15:57:32.700699   36162 main.go:141] libmachine: (ha-674765-m03)     </disk>
	I0625 15:57:32.700709   36162 main.go:141] libmachine: (ha-674765-m03)     <disk type='file' device='disk'>
	I0625 15:57:32.700722   36162 main.go:141] libmachine: (ha-674765-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0625 15:57:32.700738   36162 main.go:141] libmachine: (ha-674765-m03)       <source file='/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/ha-674765-m03.rawdisk'/>
	I0625 15:57:32.700754   36162 main.go:141] libmachine: (ha-674765-m03)       <target dev='hda' bus='virtio'/>
	I0625 15:57:32.700770   36162 main.go:141] libmachine: (ha-674765-m03)     </disk>
	I0625 15:57:32.700780   36162 main.go:141] libmachine: (ha-674765-m03)     <interface type='network'>
	I0625 15:57:32.700792   36162 main.go:141] libmachine: (ha-674765-m03)       <source network='mk-ha-674765'/>
	I0625 15:57:32.700803   36162 main.go:141] libmachine: (ha-674765-m03)       <model type='virtio'/>
	I0625 15:57:32.700814   36162 main.go:141] libmachine: (ha-674765-m03)     </interface>
	I0625 15:57:32.700825   36162 main.go:141] libmachine: (ha-674765-m03)     <interface type='network'>
	I0625 15:57:32.700839   36162 main.go:141] libmachine: (ha-674765-m03)       <source network='default'/>
	I0625 15:57:32.700848   36162 main.go:141] libmachine: (ha-674765-m03)       <model type='virtio'/>
	I0625 15:57:32.700855   36162 main.go:141] libmachine: (ha-674765-m03)     </interface>
	I0625 15:57:32.700863   36162 main.go:141] libmachine: (ha-674765-m03)     <serial type='pty'>
	I0625 15:57:32.700873   36162 main.go:141] libmachine: (ha-674765-m03)       <target port='0'/>
	I0625 15:57:32.700882   36162 main.go:141] libmachine: (ha-674765-m03)     </serial>
	I0625 15:57:32.700892   36162 main.go:141] libmachine: (ha-674765-m03)     <console type='pty'>
	I0625 15:57:32.700913   36162 main.go:141] libmachine: (ha-674765-m03)       <target type='serial' port='0'/>
	I0625 15:57:32.700932   36162 main.go:141] libmachine: (ha-674765-m03)     </console>
	I0625 15:57:32.700944   36162 main.go:141] libmachine: (ha-674765-m03)     <rng model='virtio'>
	I0625 15:57:32.700953   36162 main.go:141] libmachine: (ha-674765-m03)       <backend model='random'>/dev/random</backend>
	I0625 15:57:32.700962   36162 main.go:141] libmachine: (ha-674765-m03)     </rng>
	I0625 15:57:32.700966   36162 main.go:141] libmachine: (ha-674765-m03)     
	I0625 15:57:32.700973   36162 main.go:141] libmachine: (ha-674765-m03)     
	I0625 15:57:32.700978   36162 main.go:141] libmachine: (ha-674765-m03)   </devices>
	I0625 15:57:32.700993   36162 main.go:141] libmachine: (ha-674765-m03) </domain>
	I0625 15:57:32.700999   36162 main.go:141] libmachine: (ha-674765-m03) 
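The XML dumped above is the libvirt domain definition for the new m03 VM: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO attached as a CD-ROM, the raw disk image, and two virtio NICs (the cluster network mk-ha-674765 plus the default network). A hedged sketch of defining and starting such a domain with the libvirt.org/go/libvirt bindings, assuming domain.xml holds a definition like the one logged; this is not the libmachine driver's code:

// Sketch: define and boot a libvirt domain from an XML definition.
package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// domain.xml is assumed to contain a definition like the one logged above.
	domainXML, err := os.ReadFile("domain.xml")
	if err != nil {
		panic(err)
	}

	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(domainXML))
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boots the defined domain
		panic(err)
	}
	fmt.Println("domain defined and started")
}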
	I0625 15:57:32.707312   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:06:25:01 in network default
	I0625 15:57:32.707869   36162 main.go:141] libmachine: (ha-674765-m03) Ensuring networks are active...
	I0625 15:57:32.707896   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:32.708594   36162 main.go:141] libmachine: (ha-674765-m03) Ensuring network default is active
	I0625 15:57:32.708856   36162 main.go:141] libmachine: (ha-674765-m03) Ensuring network mk-ha-674765 is active
	I0625 15:57:32.709236   36162 main.go:141] libmachine: (ha-674765-m03) Getting domain xml...
	I0625 15:57:32.709886   36162 main.go:141] libmachine: (ha-674765-m03) Creating domain...
	I0625 15:57:33.899693   36162 main.go:141] libmachine: (ha-674765-m03) Waiting to get IP...
	I0625 15:57:33.900360   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:33.900728   36162 main.go:141] libmachine: (ha-674765-m03) DBG | unable to find current IP address of domain ha-674765-m03 in network mk-ha-674765
	I0625 15:57:33.900768   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:33.900704   36953 retry.go:31] will retry after 189.370323ms: waiting for machine to come up
	I0625 15:57:34.092001   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:34.092489   36162 main.go:141] libmachine: (ha-674765-m03) DBG | unable to find current IP address of domain ha-674765-m03 in network mk-ha-674765
	I0625 15:57:34.092518   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:34.092447   36953 retry.go:31] will retry after 291.630508ms: waiting for machine to come up
	I0625 15:57:34.386127   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:34.386650   36162 main.go:141] libmachine: (ha-674765-m03) DBG | unable to find current IP address of domain ha-674765-m03 in network mk-ha-674765
	I0625 15:57:34.386683   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:34.386620   36953 retry.go:31] will retry after 457.585129ms: waiting for machine to come up
	I0625 15:57:34.845906   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:34.846363   36162 main.go:141] libmachine: (ha-674765-m03) DBG | unable to find current IP address of domain ha-674765-m03 in network mk-ha-674765
	I0625 15:57:34.846393   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:34.846314   36953 retry.go:31] will retry after 422.838014ms: waiting for machine to come up
	I0625 15:57:35.270927   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:35.271439   36162 main.go:141] libmachine: (ha-674765-m03) DBG | unable to find current IP address of domain ha-674765-m03 in network mk-ha-674765
	I0625 15:57:35.271489   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:35.271391   36953 retry.go:31] will retry after 708.280663ms: waiting for machine to come up
	I0625 15:57:35.981141   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:35.981691   36162 main.go:141] libmachine: (ha-674765-m03) DBG | unable to find current IP address of domain ha-674765-m03 in network mk-ha-674765
	I0625 15:57:35.981716   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:35.981645   36953 retry.go:31] will retry after 612.083185ms: waiting for machine to come up
	I0625 15:57:36.595308   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:36.595771   36162 main.go:141] libmachine: (ha-674765-m03) DBG | unable to find current IP address of domain ha-674765-m03 in network mk-ha-674765
	I0625 15:57:36.595799   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:36.595721   36953 retry.go:31] will retry after 1.0908696s: waiting for machine to come up
	I0625 15:57:37.688174   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:37.688629   36162 main.go:141] libmachine: (ha-674765-m03) DBG | unable to find current IP address of domain ha-674765-m03 in network mk-ha-674765
	I0625 15:57:37.688657   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:37.688557   36953 retry.go:31] will retry after 1.438169506s: waiting for machine to come up
	I0625 15:57:39.128827   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:39.129230   36162 main.go:141] libmachine: (ha-674765-m03) DBG | unable to find current IP address of domain ha-674765-m03 in network mk-ha-674765
	I0625 15:57:39.129260   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:39.129180   36953 retry.go:31] will retry after 1.56479191s: waiting for machine to come up
	I0625 15:57:40.696115   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:40.696651   36162 main.go:141] libmachine: (ha-674765-m03) DBG | unable to find current IP address of domain ha-674765-m03 in network mk-ha-674765
	I0625 15:57:40.696685   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:40.696588   36953 retry.go:31] will retry after 2.133683184s: waiting for machine to come up
	I0625 15:57:42.831736   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:42.832207   36162 main.go:141] libmachine: (ha-674765-m03) DBG | unable to find current IP address of domain ha-674765-m03 in network mk-ha-674765
	I0625 15:57:42.832234   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:42.832164   36953 retry.go:31] will retry after 2.653932997s: waiting for machine to come up
	I0625 15:57:45.487150   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:45.487513   36162 main.go:141] libmachine: (ha-674765-m03) DBG | unable to find current IP address of domain ha-674765-m03 in network mk-ha-674765
	I0625 15:57:45.487538   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:45.487478   36953 retry.go:31] will retry after 2.909129093s: waiting for machine to come up
	I0625 15:57:48.398685   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:48.399063   36162 main.go:141] libmachine: (ha-674765-m03) DBG | unable to find current IP address of domain ha-674765-m03 in network mk-ha-674765
	I0625 15:57:48.399085   36162 main.go:141] libmachine: (ha-674765-m03) DBG | I0625 15:57:48.399019   36953 retry.go:31] will retry after 3.985733944s: waiting for machine to come up
	I0625 15:57:52.386600   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:52.387072   36162 main.go:141] libmachine: (ha-674765-m03) Found IP for machine: 192.168.39.77
	I0625 15:57:52.387090   36162 main.go:141] libmachine: (ha-674765-m03) Reserving static IP address...
	I0625 15:57:52.387100   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has current primary IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:52.387489   36162 main.go:141] libmachine: (ha-674765-m03) DBG | unable to find host DHCP lease matching {name: "ha-674765-m03", mac: "52:54:00:82:ed:f4", ip: "192.168.39.77"} in network mk-ha-674765
	I0625 15:57:52.457146   36162 main.go:141] libmachine: (ha-674765-m03) DBG | Getting to WaitForSSH function...
	I0625 15:57:52.457178   36162 main.go:141] libmachine: (ha-674765-m03) Reserved static IP address: 192.168.39.77
	I0625 15:57:52.457191   36162 main.go:141] libmachine: (ha-674765-m03) Waiting for SSH to be available...
	I0625 15:57:52.459845   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:52.460386   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:minikube Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:52.460410   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:52.460600   36162 main.go:141] libmachine: (ha-674765-m03) DBG | Using SSH client type: external
	I0625 15:57:52.460631   36162 main.go:141] libmachine: (ha-674765-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/id_rsa (-rw-------)
	I0625 15:57:52.460668   36162 main.go:141] libmachine: (ha-674765-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.77 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0625 15:57:52.460685   36162 main.go:141] libmachine: (ha-674765-m03) DBG | About to run SSH command:
	I0625 15:57:52.460700   36162 main.go:141] libmachine: (ha-674765-m03) DBG | exit 0
	I0625 15:57:52.590423   36162 main.go:141] libmachine: (ha-674765-m03) DBG | SSH cmd err, output: <nil>: 
	I0625 15:57:52.590753   36162 main.go:141] libmachine: (ha-674765-m03) KVM machine creation complete!
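Both waits above, first for the domain to obtain a DHCP lease on mk-ha-674765 and then for SSH to answer `exit 0`, follow the retry-with-growing-delay shape the retry.go lines show. A generic sketch of that pattern; the delays and jitter are illustrative, and this is not the retry helper the log refers to:

// Generic retry-with-backoff sketch, mirroring the "will retry after ..." lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or the deadline passes, sleeping
// an increasing, jittered delay between attempts.
func retryWithBackoff(fn func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: last error: %w", err)
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay *= 2 // grow the delay, capped at a few seconds
		}
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 5 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("done:", err, "after", attempts, "attempts")
}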
	I0625 15:57:52.591027   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetConfigRaw
	I0625 15:57:52.591644   36162 main.go:141] libmachine: (ha-674765-m03) Calling .DriverName
	I0625 15:57:52.591853   36162 main.go:141] libmachine: (ha-674765-m03) Calling .DriverName
	I0625 15:57:52.592023   36162 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0625 15:57:52.592039   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetState
	I0625 15:57:52.593296   36162 main.go:141] libmachine: Detecting operating system of created instance...
	I0625 15:57:52.593309   36162 main.go:141] libmachine: Waiting for SSH to be available...
	I0625 15:57:52.593314   36162 main.go:141] libmachine: Getting to WaitForSSH function...
	I0625 15:57:52.593320   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 15:57:52.595498   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:52.595852   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:52.595878   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:52.595996   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 15:57:52.596158   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:52.596333   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:52.596476   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 15:57:52.596622   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:57:52.596866   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0625 15:57:52.596883   36162 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0625 15:57:52.713626   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
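The `exit 0` probe above runs over SSH with the generated id_rsa key. A minimal sketch of the same check using golang.org/x/crypto/ssh; the host, user, and key path mirror the log, but the code itself is illustrative rather than libmachine's native SSH client:

// Sketch: confirm a machine accepts SSH by running "exit 0".
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := "/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/id_rsa"
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // the log's external ssh also disables host key checking
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", "192.168.39.77:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// A zero exit status means the machine is up and accepting commands.
	if err := session.Run("exit 0"); err != nil {
		panic(err)
	}
	fmt.Println("SSH is available")
}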
	I0625 15:57:52.713648   36162 main.go:141] libmachine: Detecting the provisioner...
	I0625 15:57:52.713659   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 15:57:52.716664   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:52.717110   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:52.717136   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:52.717312   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 15:57:52.717486   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:52.717638   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:52.717774   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 15:57:52.717917   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:57:52.718128   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0625 15:57:52.718147   36162 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0625 15:57:52.830947   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0625 15:57:52.831013   36162 main.go:141] libmachine: found compatible host: buildroot
	I0625 15:57:52.831026   36162 main.go:141] libmachine: Provisioning with buildroot...
	I0625 15:57:52.831037   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetMachineName
	I0625 15:57:52.831265   36162 buildroot.go:166] provisioning hostname "ha-674765-m03"
	I0625 15:57:52.831290   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetMachineName
	I0625 15:57:52.831466   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 15:57:52.834163   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:52.834616   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:52.834642   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:52.834774   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 15:57:52.834930   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:52.835079   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:52.835204   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 15:57:52.835359   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:57:52.835508   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0625 15:57:52.835520   36162 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-674765-m03 && echo "ha-674765-m03" | sudo tee /etc/hostname
	I0625 15:57:52.960308   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-674765-m03
	
	I0625 15:57:52.960331   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 15:57:52.962661   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:52.962978   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:52.963006   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:52.963205   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 15:57:52.963393   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:52.963535   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:52.963676   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 15:57:52.963819   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:57:52.963965   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0625 15:57:52.963980   36162 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-674765-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-674765-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-674765-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0625 15:57:53.091732   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
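Note: the hostname step above is just an idempotent shell snippet pushed over SSH, so re-running it leaves /etc/hosts unchanged. A minimal Go sketch (not minikube's actual code) that assembles the same command string for a given node name:

    // hostscmd.go - sketch only: build the idempotent /etc/hosts update shown
    // in the log above for a given hostname.
    package main

    import "fmt"

    func hostsUpdateCmd(hostname string) string {
    	return fmt.Sprintf(`
    if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, hostname)
    }

    func main() {
    	// Print the command that would be run on ha-674765-m03.
    	fmt.Println(hostsUpdateCmd("ha-674765-m03"))
    }
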
	I0625 15:57:53.091760   36162 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19128-13846/.minikube CaCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19128-13846/.minikube}
	I0625 15:57:53.091793   36162 buildroot.go:174] setting up certificates
	I0625 15:57:53.091814   36162 provision.go:84] configureAuth start
	I0625 15:57:53.091837   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetMachineName
	I0625 15:57:53.092146   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetIP
	I0625 15:57:53.094875   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.095285   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:53.095314   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.095503   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 15:57:53.097543   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.097877   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:53.097905   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.097989   36162 provision.go:143] copyHostCerts
	I0625 15:57:53.098031   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem
	I0625 15:57:53.098081   36162 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem, removing ...
	I0625 15:57:53.098092   36162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem
	I0625 15:57:53.098164   36162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem (1679 bytes)
	I0625 15:57:53.098262   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem
	I0625 15:57:53.098298   36162 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem, removing ...
	I0625 15:57:53.098305   36162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem
	I0625 15:57:53.098353   36162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem (1078 bytes)
	I0625 15:57:53.098430   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem
	I0625 15:57:53.098461   36162 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem, removing ...
	I0625 15:57:53.098486   36162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem
	I0625 15:57:53.098522   36162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem (1123 bytes)
	I0625 15:57:53.098590   36162 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem org=jenkins.ha-674765-m03 san=[127.0.0.1 192.168.39.77 ha-674765-m03 localhost minikube]
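The server certificate above carries a fixed SAN set: loopback, the machine IP 192.168.39.77, the hostname, and the generic names localhost/minikube. A rough Go sketch of issuing a certificate with those SANs; it is self-signed for brevity, whereas the provisioner signs with the shared minikube CA key:

    // servercert.go - sketch: issue a certificate carrying the same SANs the
    // log lists for ha-674765-m03. Self-signed for brevity only.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-674765-m03"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-674765-m03", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.77")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	// Emit the PEM-encoded certificate; the real flow writes server.pem/server-key.pem.
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
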
	I0625 15:57:53.311582   36162 provision.go:177] copyRemoteCerts
	I0625 15:57:53.311635   36162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0625 15:57:53.311653   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 15:57:53.314426   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.314761   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:53.314794   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.315006   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 15:57:53.315210   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:53.315380   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 15:57:53.315572   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/id_rsa Username:docker}
	I0625 15:57:53.405563   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0625 15:57:53.405628   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0625 15:57:53.430960   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0625 15:57:53.431019   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0625 15:57:53.454267   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0625 15:57:53.454322   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0625 15:57:53.477425   36162 provision.go:87] duration metric: took 385.597394ms to configureAuth
	I0625 15:57:53.477458   36162 buildroot.go:189] setting minikube options for container-runtime
	I0625 15:57:53.477688   36162 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 15:57:53.477753   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 15:57:53.480334   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.480689   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:53.480715   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.480903   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 15:57:53.481116   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:53.481305   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:53.481413   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 15:57:53.481638   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:57:53.481794   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0625 15:57:53.481809   36162 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0625 15:57:53.760941   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0625 15:57:53.760970   36162 main.go:141] libmachine: Checking connection to Docker...
	I0625 15:57:53.760978   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetURL
	I0625 15:57:53.762294   36162 main.go:141] libmachine: (ha-674765-m03) DBG | Using libvirt version 6000000
	I0625 15:57:53.764612   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.765018   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:53.765045   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.765213   36162 main.go:141] libmachine: Docker is up and running!
	I0625 15:57:53.765226   36162 main.go:141] libmachine: Reticulating splines...
	I0625 15:57:53.765232   36162 client.go:171] duration metric: took 21.602135409s to LocalClient.Create
	I0625 15:57:53.765251   36162 start.go:167] duration metric: took 21.602194985s to libmachine.API.Create "ha-674765"
	I0625 15:57:53.765260   36162 start.go:293] postStartSetup for "ha-674765-m03" (driver="kvm2")
	I0625 15:57:53.765268   36162 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0625 15:57:53.765283   36162 main.go:141] libmachine: (ha-674765-m03) Calling .DriverName
	I0625 15:57:53.765514   36162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0625 15:57:53.765534   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 15:57:53.767703   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.768140   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:53.768154   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.768286   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 15:57:53.768453   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:53.768577   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 15:57:53.768673   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/id_rsa Username:docker}
	I0625 15:57:53.857525   36162 ssh_runner.go:195] Run: cat /etc/os-release
	I0625 15:57:53.861825   36162 info.go:137] Remote host: Buildroot 2023.02.9
	I0625 15:57:53.861843   36162 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/addons for local assets ...
	I0625 15:57:53.861905   36162 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/files for local assets ...
	I0625 15:57:53.861985   36162 filesync.go:149] local asset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> 212392.pem in /etc/ssl/certs
	I0625 15:57:53.861997   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> /etc/ssl/certs/212392.pem
	I0625 15:57:53.862111   36162 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0625 15:57:53.871438   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem --> /etc/ssl/certs/212392.pem (1708 bytes)
	I0625 15:57:53.895481   36162 start.go:296] duration metric: took 130.210649ms for postStartSetup
	I0625 15:57:53.895531   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetConfigRaw
	I0625 15:57:53.896073   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetIP
	I0625 15:57:53.898403   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.898757   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:53.898780   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.899085   36162 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/config.json ...
	I0625 15:57:53.899301   36162 start.go:128] duration metric: took 21.754290804s to createHost
	I0625 15:57:53.899326   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 15:57:53.901351   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.901656   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:53.901678   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:53.901842   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 15:57:53.901997   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:53.902160   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:53.902294   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 15:57:53.902448   36162 main.go:141] libmachine: Using SSH client type: native
	I0625 15:57:53.902621   36162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0625 15:57:53.902642   36162 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0625 15:57:54.014840   36162 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719331073.982993173
	
	I0625 15:57:54.014869   36162 fix.go:216] guest clock: 1719331073.982993173
	I0625 15:57:54.014880   36162 fix.go:229] Guest: 2024-06-25 15:57:53.982993173 +0000 UTC Remote: 2024-06-25 15:57:53.899314383 +0000 UTC m=+149.267137306 (delta=83.67879ms)
	I0625 15:57:54.014901   36162 fix.go:200] guest clock delta is within tolerance: 83.67879ms
	I0625 15:57:54.014909   36162 start.go:83] releasing machines lock for "ha-674765-m03", held for 21.86999563s
	I0625 15:57:54.014934   36162 main.go:141] libmachine: (ha-674765-m03) Calling .DriverName
	I0625 15:57:54.015185   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetIP
	I0625 15:57:54.017854   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:54.018181   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:54.018211   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:54.020506   36162 out.go:177] * Found network options:
	I0625 15:57:54.021955   36162 out.go:177]   - NO_PROXY=192.168.39.128,192.168.39.53
	W0625 15:57:54.023329   36162 proxy.go:119] fail to check proxy env: Error ip not in block
	W0625 15:57:54.023346   36162 proxy.go:119] fail to check proxy env: Error ip not in block
	I0625 15:57:54.023384   36162 main.go:141] libmachine: (ha-674765-m03) Calling .DriverName
	I0625 15:57:54.023829   36162 main.go:141] libmachine: (ha-674765-m03) Calling .DriverName
	I0625 15:57:54.023991   36162 main.go:141] libmachine: (ha-674765-m03) Calling .DriverName
	I0625 15:57:54.024065   36162 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0625 15:57:54.024107   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	W0625 15:57:54.024177   36162 proxy.go:119] fail to check proxy env: Error ip not in block
	W0625 15:57:54.024191   36162 proxy.go:119] fail to check proxy env: Error ip not in block
	I0625 15:57:54.024231   36162 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0625 15:57:54.024243   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 15:57:54.026696   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:54.026882   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:54.027121   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:54.027151   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:54.027240   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 15:57:54.027372   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:54.027399   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:54.027441   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:54.027524   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 15:57:54.027592   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 15:57:54.027677   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 15:57:54.027744   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/id_rsa Username:docker}
	I0625 15:57:54.027803   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 15:57:54.027910   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/id_rsa Username:docker}
	I0625 15:57:54.258595   36162 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0625 15:57:54.267463   36162 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0625 15:57:54.267536   36162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0625 15:57:54.283400   36162 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0625 15:57:54.283418   36162 start.go:494] detecting cgroup driver to use...
	I0625 15:57:54.283474   36162 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0625 15:57:54.301784   36162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0625 15:57:54.315951   36162 docker.go:217] disabling cri-docker service (if available) ...
	I0625 15:57:54.315991   36162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0625 15:57:54.330200   36162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0625 15:57:54.343260   36162 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0625 15:57:54.458931   36162 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0625 15:57:54.618633   36162 docker.go:233] disabling docker service ...
	I0625 15:57:54.618710   36162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0625 15:57:54.633242   36162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0625 15:57:54.646486   36162 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0625 15:57:54.779838   36162 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0625 15:57:54.903681   36162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0625 15:57:54.917606   36162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0625 15:57:54.939193   36162 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0625 15:57:54.939255   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:57:54.950489   36162 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0625 15:57:54.950553   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:57:54.961722   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:57:54.972476   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:57:54.982665   36162 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0625 15:57:54.993259   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:57:55.003467   36162 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:57:55.020931   36162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 15:57:55.031388   36162 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0625 15:57:55.040605   36162 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0625 15:57:55.040648   36162 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0625 15:57:55.053598   36162 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0625 15:57:55.063355   36162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 15:57:55.184293   36162 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0625 15:57:55.333811   36162 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0625 15:57:55.333870   36162 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0625 15:57:55.339038   36162 start.go:562] Will wait 60s for crictl version
	I0625 15:57:55.339088   36162 ssh_runner.go:195] Run: which crictl
	I0625 15:57:55.342848   36162 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0625 15:57:55.381279   36162 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
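After restarting CRI-O the tooling waits up to 60s for /var/run/crio/crio.sock to appear, then probes `crictl version` (which reports cri-o 1.29.1 above). A minimal Go polling sketch under those assumptions, with the path and timeout taken from the log:

    // waitsock.go - sketch: poll for a socket path with a deadline, the way the
    // log above waits for /var/run/crio/crio.sock before calling crictl.
    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func waitForPath(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("socket is ready; crictl version can be probed next")
    }
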
	I0625 15:57:55.381365   36162 ssh_runner.go:195] Run: crio --version
	I0625 15:57:55.409289   36162 ssh_runner.go:195] Run: crio --version
	I0625 15:57:55.447658   36162 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0625 15:57:55.448979   36162 out.go:177]   - env NO_PROXY=192.168.39.128
	I0625 15:57:55.450163   36162 out.go:177]   - env NO_PROXY=192.168.39.128,192.168.39.53
	I0625 15:57:55.451313   36162 main.go:141] libmachine: (ha-674765-m03) Calling .GetIP
	I0625 15:57:55.453968   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:55.454320   36162 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 15:57:55.454344   36162 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 15:57:55.454585   36162 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0625 15:57:55.458825   36162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0625 15:57:55.471650   36162 mustload.go:65] Loading cluster: ha-674765
	I0625 15:57:55.471847   36162 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 15:57:55.472082   36162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:57:55.472119   36162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:57:55.486939   36162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42383
	I0625 15:57:55.487364   36162 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:57:55.487847   36162 main.go:141] libmachine: Using API Version  1
	I0625 15:57:55.487867   36162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:57:55.488184   36162 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:57:55.488359   36162 main.go:141] libmachine: (ha-674765) Calling .GetState
	I0625 15:57:55.489897   36162 host.go:66] Checking if "ha-674765" exists ...
	I0625 15:57:55.490184   36162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:57:55.490215   36162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:57:55.504303   36162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39629
	I0625 15:57:55.504624   36162 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:57:55.505032   36162 main.go:141] libmachine: Using API Version  1
	I0625 15:57:55.505052   36162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:57:55.505333   36162 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:57:55.505516   36162 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 15:57:55.505649   36162 certs.go:68] Setting up /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765 for IP: 192.168.39.77
	I0625 15:57:55.505671   36162 certs.go:194] generating shared ca certs ...
	I0625 15:57:55.505692   36162 certs.go:226] acquiring lock for ca certs: {Name:mkac904b769881cd26c50f043dc80ff92937f71f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:57:55.505823   36162 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key
	I0625 15:57:55.505871   36162 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key
	I0625 15:57:55.505883   36162 certs.go:256] generating profile certs ...
	I0625 15:57:55.505973   36162 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.key
	I0625 15:57:55.506004   36162 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.bc4554f3
	I0625 15:57:55.506022   36162 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.bc4554f3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.128 192.168.39.53 192.168.39.77 192.168.39.254]
	I0625 15:57:55.648828   36162 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.bc4554f3 ...
	I0625 15:57:55.648854   36162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.bc4554f3: {Name:mkb9321824526d9fcb14c00a8fe4d2304bf300a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:57:55.649008   36162 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.bc4554f3 ...
	I0625 15:57:55.649019   36162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.bc4554f3: {Name:mk876eecb0530649eecba078952602b65db732ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:57:55.649083   36162 certs.go:381] copying /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.bc4554f3 -> /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt
	I0625 15:57:55.649198   36162 certs.go:385] copying /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.bc4554f3 -> /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key
	I0625 15:57:55.649323   36162 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.key
	I0625 15:57:55.649338   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0625 15:57:55.649350   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0625 15:57:55.649363   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0625 15:57:55.649375   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0625 15:57:55.649388   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0625 15:57:55.649399   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0625 15:57:55.649411   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0625 15:57:55.649423   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0625 15:57:55.649463   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem (1338 bytes)
	W0625 15:57:55.649488   36162 certs.go:480] ignoring /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239_empty.pem, impossibly tiny 0 bytes
	I0625 15:57:55.649497   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem (1679 bytes)
	I0625 15:57:55.649529   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem (1078 bytes)
	I0625 15:57:55.649560   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem (1123 bytes)
	I0625 15:57:55.649591   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem (1679 bytes)
	I0625 15:57:55.649647   36162 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem (1708 bytes)
	I0625 15:57:55.649685   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem -> /usr/share/ca-certificates/21239.pem
	I0625 15:57:55.649701   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> /usr/share/ca-certificates/212392.pem
	I0625 15:57:55.649715   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0625 15:57:55.649745   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:57:55.652612   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:57:55.652960   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:57:55.652982   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:57:55.653109   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:57:55.653285   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:57:55.653414   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:57:55.653539   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 15:57:55.730727   36162 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0625 15:57:55.735497   36162 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0625 15:57:55.746777   36162 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0625 15:57:55.750883   36162 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0625 15:57:55.762298   36162 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0625 15:57:55.766477   36162 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0625 15:57:55.776696   36162 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0625 15:57:55.781544   36162 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0625 15:57:55.791265   36162 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0625 15:57:55.795550   36162 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0625 15:57:55.805049   36162 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0625 15:57:55.809045   36162 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0625 15:57:55.819392   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0625 15:57:55.845523   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0625 15:57:55.869662   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0625 15:57:55.892727   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0625 15:57:55.916900   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0625 15:57:55.940788   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0625 15:57:55.964303   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0625 15:57:55.988802   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0625 15:57:56.012333   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem --> /usr/share/ca-certificates/21239.pem (1338 bytes)
	I0625 15:57:56.035727   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem --> /usr/share/ca-certificates/212392.pem (1708 bytes)
	I0625 15:57:56.058836   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0625 15:57:56.082713   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0625 15:57:56.098551   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0625 15:57:56.115185   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0625 15:57:56.131213   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0625 15:57:56.147568   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0625 15:57:56.165389   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0625 15:57:56.182891   36162 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0625 15:57:56.200285   36162 ssh_runner.go:195] Run: openssl version
	I0625 15:57:56.206203   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21239.pem && ln -fs /usr/share/ca-certificates/21239.pem /etc/ssl/certs/21239.pem"
	I0625 15:57:56.219074   36162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21239.pem
	I0625 15:57:56.223771   36162 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 25 15:51 /usr/share/ca-certificates/21239.pem
	I0625 15:57:56.223812   36162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21239.pem
	I0625 15:57:56.230373   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21239.pem /etc/ssl/certs/51391683.0"
	I0625 15:57:56.242946   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/212392.pem && ln -fs /usr/share/ca-certificates/212392.pem /etc/ssl/certs/212392.pem"
	I0625 15:57:56.255177   36162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/212392.pem
	I0625 15:57:56.259689   36162 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 25 15:51 /usr/share/ca-certificates/212392.pem
	I0625 15:57:56.259747   36162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/212392.pem
	I0625 15:57:56.265463   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/212392.pem /etc/ssl/certs/3ec20f2e.0"
	I0625 15:57:56.277505   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0625 15:57:56.289907   36162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0625 15:57:56.294823   36162 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 25 15:10 /usr/share/ca-certificates/minikubeCA.pem
	I0625 15:57:56.294870   36162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0625 15:57:56.300383   36162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0625 15:57:56.311084   36162 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0625 15:57:56.314987   36162 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0625 15:57:56.315041   36162 kubeadm.go:928] updating node {m03 192.168.39.77 8443 v1.30.2 crio true true} ...
	I0625 15:57:56.315127   36162 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-674765-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-674765 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
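The kubelet unit above is rendered from the node name, node IP, and Kubernetes version. A hedged sketch of that style of templating (the struct keys here are illustrative, not minikube's field names):

    // kubeletunit.go - sketch: render a kubelet ExecStart drop-in for a node,
    // in the spirit of the unit shown above.
    package main

    import (
    	"os"
    	"text/template"
    )

    const unit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(unit))
    	_ = t.Execute(os.Stdout, map[string]string{
    		"KubernetesVersion": "v1.30.2",
    		"NodeName":          "ha-674765-m03",
    		"NodeIP":            "192.168.39.77",
    	})
    }
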
	I0625 15:57:56.315152   36162 kube-vip.go:115] generating kube-vip config ...
	I0625 15:57:56.315186   36162 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0625 15:57:56.332544   36162 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0625 15:57:56.332596   36162 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
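The "auto-enabling control-plane load-balancing" message a few lines earlier comes from probing for IPVS kernel modules before the manifest above is written; when the modprobe succeeds, lb_enable is rendered as "true". A small Go sketch of that decision (the command is the one shown in the log; everything else is illustrative):

    // kubevip_lb.go - sketch: decide whether to enable kube-vip's control-plane
    // load balancing by probing for IPVS kernel modules, as the log does with
    // `modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack`.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func ipvsAvailable() bool {
    	cmd := exec.Command("sudo", "sh", "-c",
    		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack")
    	return cmd.Run() == nil
    }

    func main() {
    	lbEnable := "false"
    	if ipvsAvailable() {
    		lbEnable = "true" // rendered into the manifest as the lb_enable env var
    	}
    	fmt.Println("lb_enable =", lbEnable)
    }
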
	I0625 15:57:56.332645   36162 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0625 15:57:56.342307   36162 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0625 15:57:56.342357   36162 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0625 15:57:56.352425   36162 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0625 15:57:56.352452   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0625 15:57:56.352471   36162 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256
	I0625 15:57:56.352488   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0625 15:57:56.352501   36162 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256
	I0625 15:57:56.352515   36162 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0625 15:57:56.352550   36162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 15:57:56.352553   36162 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0625 15:57:56.357066   36162 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0625 15:57:56.357093   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0625 15:57:56.384153   36162 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0625 15:57:56.384188   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0625 15:57:56.384219   36162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0625 15:57:56.384307   36162 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0625 15:57:56.440143   36162 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0625 15:57:56.440181   36162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
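The kubectl/kubeadm/kubelet binaries above come from dl.k8s.io, each paired with a published .sha256 file (the "checksum=file:" URLs in the log). A self-contained sketch of that verify-after-download pattern; it fetches the real kubectl URL from the log, so run it only where such a download is acceptable:

    // fetchbin.go - sketch: download a Kubernetes binary and verify it against
    // the published .sha256 file, mirroring the checksum URLs in the log.
    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"strings"
    )

    func fetch(url string) ([]byte, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return nil, err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
    	}
    	return io.ReadAll(resp.Body)
    }

    func main() {
    	base := "https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl"
    	bin, err := fetch(base)
    	if err != nil {
    		panic(err)
    	}
    	sum, err := fetch(base + ".sha256")
    	if err != nil {
    		panic(err)
    	}
    	got := sha256.Sum256(bin)
    	want := strings.Fields(string(sum))[0]
    	if hex.EncodeToString(got[:]) != want {
    		panic("checksum mismatch for kubectl")
    	}
    	fmt.Println("kubectl checksum verified,", len(bin), "bytes")
    }
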
	I0625 15:57:57.210538   36162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0625 15:57:57.220712   36162 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0625 15:57:57.238402   36162 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0625 15:57:57.256107   36162 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0625 15:57:57.273920   36162 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0625 15:57:57.277976   36162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0625 15:57:57.292015   36162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 15:57:57.415561   36162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0625 15:57:57.433434   36162 host.go:66] Checking if "ha-674765" exists ...
	I0625 15:57:57.433886   36162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:57:57.433944   36162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:57:57.449349   36162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33363
	I0625 15:57:57.449733   36162 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:57:57.450211   36162 main.go:141] libmachine: Using API Version  1
	I0625 15:57:57.450232   36162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:57:57.450629   36162 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:57:57.450828   36162 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 15:57:57.450975   36162 start.go:316] joinCluster: &{Name:ha-674765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-674765 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.77 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 15:57:57.451136   36162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0625 15:57:57.451169   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 15:57:57.454112   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:57:57.454577   36162 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 15:57:57.454605   36162 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 15:57:57.454752   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 15:57:57.454933   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 15:57:57.455098   36162 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 15:57:57.455256   36162 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 15:57:57.615676   36162 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.77 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0625 15:57:57.615727   36162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token sxxgbm.6pnydo1y71smfsmd --discovery-token-ca-cert-hash sha256:df4523a4334c80aff4a7c2fc7b4a73691744a675a28cdb3d4468287f693ab03d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-674765-m03 --control-plane --apiserver-advertise-address=192.168.39.77 --apiserver-bind-port=8443"
	I0625 15:58:19.941664   36162 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token sxxgbm.6pnydo1y71smfsmd --discovery-token-ca-cert-hash sha256:df4523a4334c80aff4a7c2fc7b4a73691744a675a28cdb3d4468287f693ab03d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-674765-m03 --control-plane --apiserver-advertise-address=192.168.39.77 --apiserver-bind-port=8443": (22.325905156s)
	I0625 15:58:19.941700   36162 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0625 15:58:20.572350   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-674765-m03 minikube.k8s.io/updated_at=2024_06_25T15_58_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b minikube.k8s.io/name=ha-674765 minikube.k8s.io/primary=false
	I0625 15:58:20.688902   36162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-674765-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0625 15:58:20.799607   36162 start.go:318] duration metric: took 23.348630958s to joinCluster
	I0625 15:58:20.799660   36162 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.77 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0625 15:58:20.800004   36162 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 15:58:20.801104   36162 out.go:177] * Verifying Kubernetes components...
	I0625 15:58:20.802436   36162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 15:58:21.103357   36162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0625 15:58:21.125097   36162 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19128-13846/kubeconfig
	I0625 15:58:21.125357   36162 kapi.go:59] client config for ha-674765: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.crt", KeyFile:"/home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.key", CAFile:"/home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfa3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0625 15:58:21.125426   36162 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.128:8443
	I0625 15:58:21.125637   36162 node_ready.go:35] waiting up to 6m0s for node "ha-674765-m03" to be "Ready" ...
	I0625 15:58:21.125711   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:21.125721   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:21.125732   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:21.125740   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:21.129364   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:21.626179   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:21.626199   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:21.626209   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:21.626213   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:21.636551   36162 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0625 15:58:22.126587   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:22.126607   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:22.126615   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:22.126620   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:22.130419   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:22.626424   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:22.626449   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:22.626460   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:22.626463   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:22.630458   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:23.126567   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:23.126592   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:23.126604   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:23.126610   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:23.130434   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:23.131066   36162 node_ready.go:53] node "ha-674765-m03" has status "Ready":"False"
	I0625 15:58:23.626527   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:23.626550   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:23.626560   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:23.626564   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:23.630032   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:24.125913   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:24.125937   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:24.125949   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:24.125957   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:24.128997   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:24.626825   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:24.626846   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:24.626854   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:24.626859   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:24.630142   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:25.126559   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:25.126580   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:25.126588   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:25.126592   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:25.129571   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:25.626433   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:25.626454   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:25.626464   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:25.626483   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:25.629930   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:25.630777   36162 node_ready.go:53] node "ha-674765-m03" has status "Ready":"False"
	I0625 15:58:26.125981   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:26.126003   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:26.126012   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:26.126016   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:26.130081   36162 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0625 15:58:26.626721   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:26.626744   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:26.626756   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:26.626761   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:26.630683   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:27.126830   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:27.126855   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:27.126867   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:27.126873   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:27.130321   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:27.626460   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:27.626500   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:27.626509   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:27.626513   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:27.629237   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:28.126111   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:28.126132   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:28.126140   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:28.126145   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:28.129840   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:28.130827   36162 node_ready.go:53] node "ha-674765-m03" has status "Ready":"False"
	I0625 15:58:28.626156   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:28.626176   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:28.626185   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:28.626188   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:28.630375   36162 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0625 15:58:28.631142   36162 node_ready.go:49] node "ha-674765-m03" has status "Ready":"True"
	I0625 15:58:28.631165   36162 node_ready.go:38] duration metric: took 7.505510142s for node "ha-674765-m03" to be "Ready" ...
	I0625 15:58:28.631177   36162 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0625 15:58:28.631252   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0625 15:58:28.631267   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:28.631276   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:28.631280   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:28.639163   36162 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0625 15:58:28.645727   36162 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-28db5" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:28.645795   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-28db5
	I0625 15:58:28.645807   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:28.645817   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:28.645823   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:28.648395   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:28.649046   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:58:28.649062   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:28.649072   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:28.649082   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:28.651681   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:28.652234   36162 pod_ready.go:92] pod "coredns-7db6d8ff4d-28db5" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:28.652252   36162 pod_ready.go:81] duration metric: took 6.503502ms for pod "coredns-7db6d8ff4d-28db5" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:28.652263   36162 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-84zkt" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:28.652320   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-84zkt
	I0625 15:58:28.652330   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:28.652340   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:28.652350   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:28.661307   36162 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0625 15:58:28.661992   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:58:28.662006   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:28.662016   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:28.662021   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:28.684062   36162 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0625 15:58:28.684759   36162 pod_ready.go:92] pod "coredns-7db6d8ff4d-84zkt" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:28.684776   36162 pod_ready.go:81] duration metric: took 32.502068ms for pod "coredns-7db6d8ff4d-84zkt" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:28.684789   36162 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:28.684853   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765
	I0625 15:58:28.684864   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:28.684874   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:28.684882   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:28.692708   36162 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0625 15:58:28.693424   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:58:28.693435   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:28.693442   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:28.693446   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:28.702178   36162 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0625 15:58:28.702897   36162 pod_ready.go:92] pod "etcd-ha-674765" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:28.702915   36162 pod_ready.go:81] duration metric: took 18.118053ms for pod "etcd-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:28.702926   36162 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:28.702975   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m02
	I0625 15:58:28.702987   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:28.702997   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:28.703007   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:28.711387   36162 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0625 15:58:28.712046   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:58:28.712067   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:28.712077   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:28.712082   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:28.718330   36162 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0625 15:58:28.718897   36162 pod_ready.go:92] pod "etcd-ha-674765-m02" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:28.718914   36162 pod_ready.go:81] duration metric: took 15.981652ms for pod "etcd-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:28.718922   36162 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-674765-m03" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:28.826168   36162 request.go:629] Waited for 107.187135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m03
	I0625 15:58:28.826225   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m03
	I0625 15:58:28.826230   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:28.826238   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:28.826244   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:28.829951   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:29.026917   36162 request.go:629] Waited for 196.356128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:29.026986   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:29.026992   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:29.026999   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:29.027002   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:29.030159   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:29.226523   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m03
	I0625 15:58:29.226543   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:29.226551   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:29.226555   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:29.230175   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:29.427146   36162 request.go:629] Waited for 196.324759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:29.427202   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:29.427207   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:29.427215   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:29.427219   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:29.430166   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:29.719996   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m03
	I0625 15:58:29.720014   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:29.720022   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:29.720026   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:29.723890   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:29.826403   36162 request.go:629] Waited for 101.178342ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:29.826448   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:29.826453   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:29.826460   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:29.826491   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:29.829211   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:30.219587   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m03
	I0625 15:58:30.219611   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:30.219622   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:30.219627   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:30.223664   36162 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0625 15:58:30.226852   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:30.226869   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:30.226877   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:30.226884   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:30.230088   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:30.719224   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m03
	I0625 15:58:30.719254   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:30.719265   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:30.719270   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:30.722867   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:30.723549   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:30.723565   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:30.723575   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:30.723580   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:30.726547   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:30.727281   36162 pod_ready.go:102] pod "etcd-ha-674765-m03" in "kube-system" namespace has status "Ready":"False"
	I0625 15:58:31.219144   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m03
	I0625 15:58:31.219169   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:31.219179   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:31.219186   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:31.223298   36162 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0625 15:58:31.224233   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:31.224252   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:31.224263   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:31.224269   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:31.227155   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:31.720125   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m03
	I0625 15:58:31.720150   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:31.720162   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:31.720167   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:31.723659   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:31.724457   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:31.724475   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:31.724485   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:31.724493   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:31.726925   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:32.220033   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m03
	I0625 15:58:32.220068   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:32.220080   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:32.220088   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:32.224872   36162 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0625 15:58:32.225501   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:32.225514   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:32.225525   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:32.225529   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:32.228598   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:32.719227   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m03
	I0625 15:58:32.719258   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:32.719271   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:32.719276   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:32.723163   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:32.723990   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:32.724009   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:32.724021   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:32.724027   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:32.727211   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:32.727892   36162 pod_ready.go:102] pod "etcd-ha-674765-m03" in "kube-system" namespace has status "Ready":"False"
	I0625 15:58:33.219201   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m03
	I0625 15:58:33.219230   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:33.219241   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:33.219248   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:33.223431   36162 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0625 15:58:33.224332   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:33.224349   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:33.224358   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:33.224361   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:33.227456   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:33.720112   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m03
	I0625 15:58:33.720135   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:33.720146   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:33.720152   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:33.724243   36162 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0625 15:58:33.724982   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:33.724996   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:33.725004   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:33.725008   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:33.727577   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:34.219068   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m03
	I0625 15:58:34.219092   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:34.219101   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:34.219106   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:34.222789   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:34.223667   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:34.223685   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:34.223695   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:34.223700   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:34.226290   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:34.719177   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-674765-m03
	I0625 15:58:34.719205   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:34.719216   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:34.719222   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:34.723967   36162 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0625 15:58:34.724690   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:34.724706   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:34.724713   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:34.724718   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:34.727196   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:34.727673   36162 pod_ready.go:92] pod "etcd-ha-674765-m03" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:34.727695   36162 pod_ready.go:81] duration metric: took 6.008765887s for pod "etcd-ha-674765-m03" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:34.727719   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:34.727787   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-674765
	I0625 15:58:34.727796   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:34.727809   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:34.727817   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:34.730233   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:34.731397   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:58:34.731415   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:34.731423   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:34.731428   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:34.733788   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:34.734356   36162 pod_ready.go:92] pod "kube-apiserver-ha-674765" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:34.734374   36162 pod_ready.go:81] duration metric: took 6.644453ms for pod "kube-apiserver-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:34.734382   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:34.734438   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-674765-m02
	I0625 15:58:34.734449   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:34.734459   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:34.734487   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:34.736696   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:34.737264   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:58:34.737283   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:34.737293   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:34.737300   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:34.739591   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:34.740138   36162 pod_ready.go:92] pod "kube-apiserver-ha-674765-m02" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:34.740156   36162 pod_ready.go:81] duration metric: took 5.766096ms for pod "kube-apiserver-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:34.740166   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-674765-m03" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:34.826542   36162 request.go:629] Waited for 86.319241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-674765-m03
	I0625 15:58:34.826615   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-674765-m03
	I0625 15:58:34.826623   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:34.826630   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:34.826637   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:34.830069   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:35.026189   36162 request.go:629] Waited for 195.115459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:35.026250   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:35.026255   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:35.026262   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:35.026266   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:35.030657   36162 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0625 15:58:35.031158   36162 pod_ready.go:92] pod "kube-apiserver-ha-674765-m03" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:35.031176   36162 pod_ready.go:81] duration metric: took 291.001645ms for pod "kube-apiserver-ha-674765-m03" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:35.031185   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:35.226547   36162 request.go:629] Waited for 195.302496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-674765
	I0625 15:58:35.226619   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-674765
	I0625 15:58:35.226626   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:35.226635   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:35.226641   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:35.230134   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:35.427026   36162 request.go:629] Waited for 196.04705ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:58:35.427114   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:58:35.427123   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:35.427137   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:35.427143   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:35.430233   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:35.430896   36162 pod_ready.go:92] pod "kube-controller-manager-ha-674765" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:35.430914   36162 pod_ready.go:81] duration metric: took 399.722704ms for pod "kube-controller-manager-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:35.430923   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:35.626668   36162 request.go:629] Waited for 195.688648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-674765-m02
	I0625 15:58:35.626755   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-674765-m02
	I0625 15:58:35.626766   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:35.626777   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:35.626785   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:35.630604   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:35.826972   36162 request.go:629] Waited for 195.349311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:58:35.827023   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:58:35.827029   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:35.827040   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:35.827045   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:35.830575   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:35.831239   36162 pod_ready.go:92] pod "kube-controller-manager-ha-674765-m02" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:35.831260   36162 pod_ready.go:81] duration metric: took 400.329985ms for pod "kube-controller-manager-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:35.831273   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-674765-m03" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:36.026223   36162 request.go:629] Waited for 194.87977ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-674765-m03
	I0625 15:58:36.026285   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-674765-m03
	I0625 15:58:36.026294   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:36.026314   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:36.026334   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:36.029365   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:36.226358   36162 request.go:629] Waited for 196.299154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:36.226430   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:36.226441   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:36.226453   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:36.226460   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:36.230009   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:36.230751   36162 pod_ready.go:92] pod "kube-controller-manager-ha-674765-m03" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:36.230772   36162 pod_ready.go:81] duration metric: took 399.490216ms for pod "kube-controller-manager-ha-674765-m03" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:36.230785   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lsmft" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:36.426859   36162 request.go:629] Waited for 195.997385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lsmft
	I0625 15:58:36.426956   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lsmft
	I0625 15:58:36.426968   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:36.426975   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:36.426982   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:36.429723   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:36.627217   36162 request.go:629] Waited for 196.650446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:58:36.627314   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:58:36.627325   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:36.627337   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:36.627350   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:36.630619   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:36.631438   36162 pod_ready.go:92] pod "kube-proxy-lsmft" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:36.631456   36162 pod_ready.go:81] duration metric: took 400.664094ms for pod "kube-proxy-lsmft" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:36.631464   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rh9n5" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:36.826547   36162 request.go:629] Waited for 195.025136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rh9n5
	I0625 15:58:36.826650   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rh9n5
	I0625 15:58:36.826663   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:36.826675   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:36.826683   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:36.829983   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:37.027064   36162 request.go:629] Waited for 196.337499ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:58:37.027150   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:58:37.027161   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:37.027171   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:37.027176   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:37.030113   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:37.030746   36162 pod_ready.go:92] pod "kube-proxy-rh9n5" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:37.030765   36162 pod_ready.go:81] duration metric: took 399.29603ms for pod "kube-proxy-rh9n5" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:37.030774   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-swfsx" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:37.227213   36162 request.go:629] Waited for 196.369052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-swfsx
	I0625 15:58:37.227268   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-swfsx
	I0625 15:58:37.227273   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:37.227281   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:37.227286   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:37.230330   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:37.426492   36162 request.go:629] Waited for 195.357462ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:37.426543   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:37.426548   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:37.426555   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:37.426560   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:37.429824   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:37.430641   36162 pod_ready.go:92] pod "kube-proxy-swfsx" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:37.430661   36162 pod_ready.go:81] duration metric: took 399.881552ms for pod "kube-proxy-swfsx" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:37.430669   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:37.627091   36162 request.go:629] Waited for 196.368488ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-674765
	I0625 15:58:37.627159   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-674765
	I0625 15:58:37.627180   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:37.627195   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:37.627200   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:37.630762   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:37.827002   36162 request.go:629] Waited for 195.371695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:58:37.827078   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765
	I0625 15:58:37.827084   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:37.827092   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:37.827099   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:37.830911   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:37.831841   36162 pod_ready.go:92] pod "kube-scheduler-ha-674765" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:37.831860   36162 pod_ready.go:81] duration metric: took 401.186016ms for pod "kube-scheduler-ha-674765" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:37.831869   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:38.026546   36162 request.go:629] Waited for 194.603271ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-674765-m02
	I0625 15:58:38.026599   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-674765-m02
	I0625 15:58:38.026603   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:38.026609   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:38.026614   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:38.029502   36162 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0625 15:58:38.226557   36162 request.go:629] Waited for 196.38695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:58:38.226648   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m02
	I0625 15:58:38.226689   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:38.226705   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:38.226709   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:38.230980   36162 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0625 15:58:38.232238   36162 pod_ready.go:92] pod "kube-scheduler-ha-674765-m02" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:38.232276   36162 pod_ready.go:81] duration metric: took 400.379729ms for pod "kube-scheduler-ha-674765-m02" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:38.232286   36162 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-674765-m03" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:38.426342   36162 request.go:629] Waited for 193.98135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-674765-m03
	I0625 15:58:38.426430   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-674765-m03
	I0625 15:58:38.426439   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:38.426453   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:38.426462   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:38.429567   36162 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0625 15:58:38.626312   36162 request.go:629] Waited for 196.10206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:38.626366   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-674765-m03
	I0625 15:58:38.626372   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:38.626379   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:38.626383   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:38.630649   36162 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0625 15:58:38.631277   36162 pod_ready.go:92] pod "kube-scheduler-ha-674765-m03" in "kube-system" namespace has status "Ready":"True"
	I0625 15:58:38.631296   36162 pod_ready.go:81] duration metric: took 399.000574ms for pod "kube-scheduler-ha-674765-m03" in "kube-system" namespace to be "Ready" ...
	I0625 15:58:38.631310   36162 pod_ready.go:38] duration metric: took 10.000120706s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0625 15:58:38.631330   36162 api_server.go:52] waiting for apiserver process to appear ...
	I0625 15:58:38.631388   36162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 15:58:38.649832   36162 api_server.go:72] duration metric: took 17.850151268s to wait for apiserver process to appear ...
	I0625 15:58:38.649848   36162 api_server.go:88] waiting for apiserver healthz status ...
	I0625 15:58:38.649862   36162 api_server.go:253] Checking apiserver healthz at https://192.168.39.128:8443/healthz ...
	I0625 15:58:38.656751   36162 api_server.go:279] https://192.168.39.128:8443/healthz returned 200:
	ok
	I0625 15:58:38.656819   36162 round_trippers.go:463] GET https://192.168.39.128:8443/version
	I0625 15:58:38.656831   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:38.656841   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:38.656850   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:38.658054   36162 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0625 15:58:38.658108   36162 api_server.go:141] control plane version: v1.30.2
	I0625 15:58:38.658123   36162 api_server.go:131] duration metric: took 8.269474ms to wait for apiserver health ...
	I0625 15:58:38.658130   36162 system_pods.go:43] waiting for kube-system pods to appear ...
	I0625 15:58:38.826522   36162 request.go:629] Waited for 168.332415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0625 15:58:38.826620   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0625 15:58:38.826631   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:38.826642   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:38.826651   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:38.833753   36162 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0625 15:58:38.841234   36162 system_pods.go:59] 24 kube-system pods found
	I0625 15:58:38.841260   36162 system_pods.go:61] "coredns-7db6d8ff4d-28db5" [1426e4a3-2f25-47e9-9b28-b23a81a3a19a] Running
	I0625 15:58:38.841267   36162 system_pods.go:61] "coredns-7db6d8ff4d-84zkt" [2f6426f8-a0c4-470c-b2b1-b62fa304c078] Running
	I0625 15:58:38.841271   36162 system_pods.go:61] "etcd-ha-674765" [a8f7d82c-8fc7-4190-99c2-0bedc24d8f4f] Running
	I0625 15:58:38.841276   36162 system_pods.go:61] "etcd-ha-674765-m02" [e3f94832-96fe-4bbf-8c53-86bab692b6a9] Running
	I0625 15:58:38.841281   36162 system_pods.go:61] "etcd-ha-674765-m03" [19a0a3e5-4f97-4ec1-9131-2cb687d36d77] Running
	I0625 15:58:38.841286   36162 system_pods.go:61] "kindnet-kkgdq" [cfb408ee-0f73-4537-87fb-fad3d2b1f3f1] Running
	I0625 15:58:38.841291   36162 system_pods.go:61] "kindnet-ntq77" [37736a9f-5b4c-421c-9027-81e961ab8550] Running
	I0625 15:58:38.841295   36162 system_pods.go:61] "kindnet-px4dn" [27ef663b-4867-4757-9e02-5086d4875471] Running
	I0625 15:58:38.841299   36162 system_pods.go:61] "kube-apiserver-ha-674765" [594e5a19-d80b-4b26-8c91-a8475fb99630] Running
	I0625 15:58:38.841304   36162 system_pods.go:61] "kube-apiserver-ha-674765-m02" [e00ad102-e252-49e9-82e4-b466ae4eb7b2] Running
	I0625 15:58:38.841309   36162 system_pods.go:61] "kube-apiserver-ha-674765-m03" [90f8d49f-694e-4872-9a70-c1211b79cefd] Running
	I0625 15:58:38.841314   36162 system_pods.go:61] "kube-controller-manager-ha-674765" [5f4f1e7d-f796-4762-9f33-61755c0daef3] Running
	I0625 15:58:38.841322   36162 system_pods.go:61] "kube-controller-manager-ha-674765-m02" [acb4b5ca-b29e-4866-be68-eb4c6425463d] Running
	I0625 15:58:38.841328   36162 system_pods.go:61] "kube-controller-manager-ha-674765-m03" [69ff2a00-e5ef-406d-aad3-aeb3fc0768b4] Running
	I0625 15:58:38.841333   36162 system_pods.go:61] "kube-proxy-lsmft" [fa5d210a-1295-497c-8a24-6a0f0dc941de] Running
	I0625 15:58:38.841338   36162 system_pods.go:61] "kube-proxy-rh9n5" [a0a24539-3168-42cc-93b3-d0b1e283d0bd] Running
	I0625 15:58:38.841347   36162 system_pods.go:61] "kube-proxy-swfsx" [d1d30f80-d2b4-4d24-8322-69850b1f882a] Running
	I0625 15:58:38.841353   36162 system_pods.go:61] "kube-scheduler-ha-674765" [2695280a-4dd5-4073-875e-63e5238bd1b7] Running
	I0625 15:58:38.841362   36162 system_pods.go:61] "kube-scheduler-ha-674765-m02" [dc04f489-1084-48d4-8cec-c79ec30e0987] Running
	I0625 15:58:38.841367   36162 system_pods.go:61] "kube-scheduler-ha-674765-m03" [231cafab-eb37-496f-aa2d-662d27d18ef0] Running
	I0625 15:58:38.841372   36162 system_pods.go:61] "kube-vip-ha-674765" [1d132475-65bb-43d1-9353-12b7be1f311f] Running
	I0625 15:58:38.841378   36162 system_pods.go:61] "kube-vip-ha-674765-m02" [dbde28c7-a109-4a7e-97bb-27576a94d2fe] Running
	I0625 15:58:38.841384   36162 system_pods.go:61] "kube-vip-ha-674765-m03" [08c72802-7f04-47c2-956a-8adc1a430e56] Running
	I0625 15:58:38.841390   36162 system_pods.go:61] "storage-provisioner" [c227c5cf-2bd6-4ebf-9fdb-09d4229cf421] Running
	I0625 15:58:38.841398   36162 system_pods.go:74] duration metric: took 183.259451ms to wait for pod list to return data ...
	I0625 15:58:38.841410   36162 default_sa.go:34] waiting for default service account to be created ...
	I0625 15:58:39.026820   36162 request.go:629] Waited for 185.339864ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/default/serviceaccounts
	I0625 15:58:39.026887   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/default/serviceaccounts
	I0625 15:58:39.026892   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:39.026900   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:39.026904   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:39.032234   36162 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0625 15:58:39.032513   36162 default_sa.go:45] found service account: "default"
	I0625 15:58:39.032532   36162 default_sa.go:55] duration metric: took 191.115688ms for default service account to be created ...
	I0625 15:58:39.032544   36162 system_pods.go:116] waiting for k8s-apps to be running ...
	I0625 15:58:39.226977   36162 request.go:629] Waited for 194.363988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0625 15:58:39.227057   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0625 15:58:39.227067   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:39.227080   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:39.227086   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:39.236119   36162 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0625 15:58:39.242986   36162 system_pods.go:86] 24 kube-system pods found
	I0625 15:58:39.243010   36162 system_pods.go:89] "coredns-7db6d8ff4d-28db5" [1426e4a3-2f25-47e9-9b28-b23a81a3a19a] Running
	I0625 15:58:39.243018   36162 system_pods.go:89] "coredns-7db6d8ff4d-84zkt" [2f6426f8-a0c4-470c-b2b1-b62fa304c078] Running
	I0625 15:58:39.243025   36162 system_pods.go:89] "etcd-ha-674765" [a8f7d82c-8fc7-4190-99c2-0bedc24d8f4f] Running
	I0625 15:58:39.243031   36162 system_pods.go:89] "etcd-ha-674765-m02" [e3f94832-96fe-4bbf-8c53-86bab692b6a9] Running
	I0625 15:58:39.243043   36162 system_pods.go:89] "etcd-ha-674765-m03" [19a0a3e5-4f97-4ec1-9131-2cb687d36d77] Running
	I0625 15:58:39.243050   36162 system_pods.go:89] "kindnet-kkgdq" [cfb408ee-0f73-4537-87fb-fad3d2b1f3f1] Running
	I0625 15:58:39.243056   36162 system_pods.go:89] "kindnet-ntq77" [37736a9f-5b4c-421c-9027-81e961ab8550] Running
	I0625 15:58:39.243064   36162 system_pods.go:89] "kindnet-px4dn" [27ef663b-4867-4757-9e02-5086d4875471] Running
	I0625 15:58:39.243073   36162 system_pods.go:89] "kube-apiserver-ha-674765" [594e5a19-d80b-4b26-8c91-a8475fb99630] Running
	I0625 15:58:39.243080   36162 system_pods.go:89] "kube-apiserver-ha-674765-m02" [e00ad102-e252-49e9-82e4-b466ae4eb7b2] Running
	I0625 15:58:39.243091   36162 system_pods.go:89] "kube-apiserver-ha-674765-m03" [90f8d49f-694e-4872-9a70-c1211b79cefd] Running
	I0625 15:58:39.243101   36162 system_pods.go:89] "kube-controller-manager-ha-674765" [5f4f1e7d-f796-4762-9f33-61755c0daef3] Running
	I0625 15:58:39.243110   36162 system_pods.go:89] "kube-controller-manager-ha-674765-m02" [acb4b5ca-b29e-4866-be68-eb4c6425463d] Running
	I0625 15:58:39.243119   36162 system_pods.go:89] "kube-controller-manager-ha-674765-m03" [69ff2a00-e5ef-406d-aad3-aeb3fc0768b4] Running
	I0625 15:58:39.243128   36162 system_pods.go:89] "kube-proxy-lsmft" [fa5d210a-1295-497c-8a24-6a0f0dc941de] Running
	I0625 15:58:39.243134   36162 system_pods.go:89] "kube-proxy-rh9n5" [a0a24539-3168-42cc-93b3-d0b1e283d0bd] Running
	I0625 15:58:39.243140   36162 system_pods.go:89] "kube-proxy-swfsx" [d1d30f80-d2b4-4d24-8322-69850b1f882a] Running
	I0625 15:58:39.243146   36162 system_pods.go:89] "kube-scheduler-ha-674765" [2695280a-4dd5-4073-875e-63e5238bd1b7] Running
	I0625 15:58:39.243153   36162 system_pods.go:89] "kube-scheduler-ha-674765-m02" [dc04f489-1084-48d4-8cec-c79ec30e0987] Running
	I0625 15:58:39.243164   36162 system_pods.go:89] "kube-scheduler-ha-674765-m03" [231cafab-eb37-496f-aa2d-662d27d18ef0] Running
	I0625 15:58:39.243173   36162 system_pods.go:89] "kube-vip-ha-674765" [1d132475-65bb-43d1-9353-12b7be1f311f] Running
	I0625 15:58:39.243180   36162 system_pods.go:89] "kube-vip-ha-674765-m02" [dbde28c7-a109-4a7e-97bb-27576a94d2fe] Running
	I0625 15:58:39.243189   36162 system_pods.go:89] "kube-vip-ha-674765-m03" [08c72802-7f04-47c2-956a-8adc1a430e56] Running
	I0625 15:58:39.243195   36162 system_pods.go:89] "storage-provisioner" [c227c5cf-2bd6-4ebf-9fdb-09d4229cf421] Running
	I0625 15:58:39.243206   36162 system_pods.go:126] duration metric: took 210.656126ms to wait for k8s-apps to be running ...
	I0625 15:58:39.243220   36162 system_svc.go:44] waiting for kubelet service to be running ....
	I0625 15:58:39.243270   36162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 15:58:39.260565   36162 system_svc.go:56] duration metric: took 17.338537ms WaitForService to wait for kubelet
	I0625 15:58:39.260592   36162 kubeadm.go:576] duration metric: took 18.46091276s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0625 15:58:39.260612   36162 node_conditions.go:102] verifying NodePressure condition ...
	I0625 15:58:39.426892   36162 request.go:629] Waited for 166.223413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes
	I0625 15:58:39.426957   36162 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes
	I0625 15:58:39.426963   36162 round_trippers.go:469] Request Headers:
	I0625 15:58:39.426975   36162 round_trippers.go:473]     Accept: application/json, */*
	I0625 15:58:39.426981   36162 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0625 15:58:39.431813   36162 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0625 15:58:39.432826   36162 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0625 15:58:39.432848   36162 node_conditions.go:123] node cpu capacity is 2
	I0625 15:58:39.432860   36162 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0625 15:58:39.432864   36162 node_conditions.go:123] node cpu capacity is 2
	I0625 15:58:39.432868   36162 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0625 15:58:39.432871   36162 node_conditions.go:123] node cpu capacity is 2
	I0625 15:58:39.432874   36162 node_conditions.go:105] duration metric: took 172.258695ms to run NodePressure ...
	I0625 15:58:39.432888   36162 start.go:240] waiting for startup goroutines ...
	I0625 15:58:39.432913   36162 start.go:254] writing updated cluster config ...
	I0625 15:58:39.433196   36162 ssh_runner.go:195] Run: rm -f paused
	I0625 15:58:39.484755   36162 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0625 15:58:39.486626   36162 out.go:177] * Done! kubectl is now configured to use "ha-674765" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jun 25 16:03:05 ha-674765 crio[684]: time="2024-06-25 16:03:05.969801184Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719331385969780660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6fe1f7e4-cb7f-4952-a32e-461daf23560b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:03:05 ha-674765 crio[684]: time="2024-06-25 16:03:05.970546257Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e6f200b9-6bdf-45f4-94dd-b4c5455c6736 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:03:05 ha-674765 crio[684]: time="2024-06-25 16:03:05.970596947Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e6f200b9-6bdf-45f4-94dd-b4c5455c6736 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:03:05 ha-674765 crio[684]: time="2024-06-25 16:03:05.970811179Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd7837c56cda33edd21808fe9d0441fdd08abd1bdebe8f801a3611412c9f4915,PodSandboxId:d18f421cdb437abaad95182a5581045ed7639dbd944aa4d3b7cbcf8551a67f1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1719331123602460126,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qjw4r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49031b4f-d04c-44bf-9725-094e7df6945c,},Annotations:map[string]string{io.kubernetes.container.hash: 37a65fc,io.kubernetes.container.restartCount: 0,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec00b1016861e1acd703a40af2b274a6d7bd1b0b9c8e37a463cd46994e27ce0b,PodSandboxId:2249d5de30294a4411052d912ac663f8b0d2f1f1e010eace066e8eba72cff9f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719330982140184531,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-84zkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f6426f8-a0c4-470c-b2b1-b62fa304c078,},Annotations:map[string]string{io.kubernetes.container.hash: f8bf5066,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dff3834f63a382816899f273fe9970c90e171b84aa75c6626dd2435c35f00d8,PodSandboxId:36a6cd372769cb4e0b61267af34ab214f7e98a894596572c1f18f91b85865fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719330982105059965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28db5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1426e4a3-2f25-47e9-9b28-b23a81a3a19a,},Annotations:map[string]string{io.kubernetes.container.hash: 370c32f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1a641a439e9e1a4812e2d701e924065cf043e82fbeeb31138efc1da913f59e,PodSandboxId:8c17c2f81c12f58083ae9c6e26c825dc4701f9b68cbf01e1583716d703bc9269,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1719330981985152081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c227c5cf-2bd6-4ebf-9fdb-09d4229cf421,},Annotations:map[string]string{io.kubernetes.container.hash: 69a09345,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3562eeca26a1a1131c2f80e823f3f8779c3e235bf200331ce891f51b37df0c,PodSandboxId:1623f777feead6fabf15a4e29139791f4c38ed435a0c368cdb1dfebf1a45ec64,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:171933098
0130797768,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ntq77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37736a9f-5b4c-421c-9027-81e961ab8550,},Annotations:map[string]string{io.kubernetes.container.hash: d770c3b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea2f95fa7a7ef2b8d281a5e1b9c59c317c03084458fe57df036d763e43180c,PodSandboxId:41bb01e505abeae0d97e1019e5c33c9523130dd829e516e2ded6ffc9072c534b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719330979753070308,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh9n5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a24539-3168-42cc-93b3-d0b1e283d0bd,},Annotations:map[string]string{io.kubernetes.container.hash: 676d07db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3ed8ce894547a7bc3deba857b5d7d733af8ba225cb579c469f090460bff27d3,PodSandboxId:ebc12e3f7a7ce5a2b5a6c7beddfe956ea5de58d27aa020dc6979043c872fc752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1719330963710738330,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41be34697cf0082e06e8923557664cc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ed432b8fb613e5494924e83c46de1bfe881b19bdb28b693da5298cf99f2e65,PodSandboxId:3498fabc6b53a97d349e73fb2ef8cb3df14eef29ff198836b4363612da9f0c9e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719330960414652980,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b7475c05d199411d1b89c7e4e58c52,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9938e238e129cd0d797a5de776e0d7b756bc8f39188223f4151974b19fb7506c,PodSandboxId:e3236a96cfba0a3dd95041d0792f8fa934df06572898ae6514913c5050b5fe9c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719330960405239254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-m
anager,io.kubernetes.pod.name: kube-controller-manager-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b6a570fe6dbd4fb43c7d5ee0090fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a40f818bed683af529089283a92813b3d87d93d9cb9290b6081645f3bced82fa,PodSandboxId:fb68107e9ae65837eff4df8cb043150d9fab87c80158db9f2658dcb99e1ae72c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719330960357440842,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551c5ef04195f1917356d613c1c2825c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b5bb08a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e903f61a215f1423f9e270e19b11a7357fee578dc62e4dd059dbe9c47a999c32,PodSandboxId:4695ac9edbc507bbbbe372a26cedd099c7de9206dd507a961697b309c7144f1e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719330960349402853,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-674765,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c950a69aa02d41e644621ad51e81615e,},Annotations:map[string]string{io.kubernetes.container.hash: ca1dc9e0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e6f200b9-6bdf-45f4-94dd-b4c5455c6736 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:03:06 ha-674765 crio[684]: time="2024-06-25 16:03:06.012784353Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4dbe1878-1bb9-48c2-923f-e8fa2e50f324 name=/runtime.v1.RuntimeService/Version
	Jun 25 16:03:06 ha-674765 crio[684]: time="2024-06-25 16:03:06.012926082Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4dbe1878-1bb9-48c2-923f-e8fa2e50f324 name=/runtime.v1.RuntimeService/Version
	Jun 25 16:03:06 ha-674765 crio[684]: time="2024-06-25 16:03:06.013996462Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=95890a12-3895-41f8-83cd-c41d6385917b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:03:06 ha-674765 crio[684]: time="2024-06-25 16:03:06.014414606Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719331386014391797,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=95890a12-3895-41f8-83cd-c41d6385917b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:03:06 ha-674765 crio[684]: time="2024-06-25 16:03:06.015004869Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4fde0f16-a2b0-4519-b3cf-e71c9e7cea57 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:03:06 ha-674765 crio[684]: time="2024-06-25 16:03:06.015074534Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4fde0f16-a2b0-4519-b3cf-e71c9e7cea57 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:03:06 ha-674765 crio[684]: time="2024-06-25 16:03:06.015545177Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd7837c56cda33edd21808fe9d0441fdd08abd1bdebe8f801a3611412c9f4915,PodSandboxId:d18f421cdb437abaad95182a5581045ed7639dbd944aa4d3b7cbcf8551a67f1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1719331123602460126,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qjw4r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49031b4f-d04c-44bf-9725-094e7df6945c,},Annotations:map[string]string{io.kubernetes.container.hash: 37a65fc,io.kubernetes.container.restartCount: 0,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec00b1016861e1acd703a40af2b274a6d7bd1b0b9c8e37a463cd46994e27ce0b,PodSandboxId:2249d5de30294a4411052d912ac663f8b0d2f1f1e010eace066e8eba72cff9f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719330982140184531,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-84zkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f6426f8-a0c4-470c-b2b1-b62fa304c078,},Annotations:map[string]string{io.kubernetes.container.hash: f8bf5066,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dff3834f63a382816899f273fe9970c90e171b84aa75c6626dd2435c35f00d8,PodSandboxId:36a6cd372769cb4e0b61267af34ab214f7e98a894596572c1f18f91b85865fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719330982105059965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28db5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1426e4a3-2f25-47e9-9b28-b23a81a3a19a,},Annotations:map[string]string{io.kubernetes.container.hash: 370c32f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1a641a439e9e1a4812e2d701e924065cf043e82fbeeb31138efc1da913f59e,PodSandboxId:8c17c2f81c12f58083ae9c6e26c825dc4701f9b68cbf01e1583716d703bc9269,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1719330981985152081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c227c5cf-2bd6-4ebf-9fdb-09d4229cf421,},Annotations:map[string]string{io.kubernetes.container.hash: 69a09345,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3562eeca26a1a1131c2f80e823f3f8779c3e235bf200331ce891f51b37df0c,PodSandboxId:1623f777feead6fabf15a4e29139791f4c38ed435a0c368cdb1dfebf1a45ec64,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:171933098
0130797768,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ntq77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37736a9f-5b4c-421c-9027-81e961ab8550,},Annotations:map[string]string{io.kubernetes.container.hash: d770c3b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea2f95fa7a7ef2b8d281a5e1b9c59c317c03084458fe57df036d763e43180c,PodSandboxId:41bb01e505abeae0d97e1019e5c33c9523130dd829e516e2ded6ffc9072c534b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719330979753070308,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh9n5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a24539-3168-42cc-93b3-d0b1e283d0bd,},Annotations:map[string]string{io.kubernetes.container.hash: 676d07db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3ed8ce894547a7bc3deba857b5d7d733af8ba225cb579c469f090460bff27d3,PodSandboxId:ebc12e3f7a7ce5a2b5a6c7beddfe956ea5de58d27aa020dc6979043c872fc752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1719330963710738330,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41be34697cf0082e06e8923557664cc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ed432b8fb613e5494924e83c46de1bfe881b19bdb28b693da5298cf99f2e65,PodSandboxId:3498fabc6b53a97d349e73fb2ef8cb3df14eef29ff198836b4363612da9f0c9e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719330960414652980,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b7475c05d199411d1b89c7e4e58c52,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9938e238e129cd0d797a5de776e0d7b756bc8f39188223f4151974b19fb7506c,PodSandboxId:e3236a96cfba0a3dd95041d0792f8fa934df06572898ae6514913c5050b5fe9c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719330960405239254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-m
anager,io.kubernetes.pod.name: kube-controller-manager-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b6a570fe6dbd4fb43c7d5ee0090fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a40f818bed683af529089283a92813b3d87d93d9cb9290b6081645f3bced82fa,PodSandboxId:fb68107e9ae65837eff4df8cb043150d9fab87c80158db9f2658dcb99e1ae72c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719330960357440842,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551c5ef04195f1917356d613c1c2825c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b5bb08a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e903f61a215f1423f9e270e19b11a7357fee578dc62e4dd059dbe9c47a999c32,PodSandboxId:4695ac9edbc507bbbbe372a26cedd099c7de9206dd507a961697b309c7144f1e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719330960349402853,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-674765,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c950a69aa02d41e644621ad51e81615e,},Annotations:map[string]string{io.kubernetes.container.hash: ca1dc9e0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4fde0f16-a2b0-4519-b3cf-e71c9e7cea57 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:03:06 ha-674765 crio[684]: time="2024-06-25 16:03:06.053847950Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9cc4bc02-e92b-48d4-a406-e75cb4299746 name=/runtime.v1.RuntimeService/Version
	Jun 25 16:03:06 ha-674765 crio[684]: time="2024-06-25 16:03:06.053998013Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9cc4bc02-e92b-48d4-a406-e75cb4299746 name=/runtime.v1.RuntimeService/Version
	Jun 25 16:03:06 ha-674765 crio[684]: time="2024-06-25 16:03:06.055329193Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=78c3c192-199c-465a-a35c-7d0549d5a875 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:03:06 ha-674765 crio[684]: time="2024-06-25 16:03:06.055746333Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719331386055724243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78c3c192-199c-465a-a35c-7d0549d5a875 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:03:06 ha-674765 crio[684]: time="2024-06-25 16:03:06.056422836Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60a052f0-8a43-4b59-92fc-202e92e45bf9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:03:06 ha-674765 crio[684]: time="2024-06-25 16:03:06.056593394Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60a052f0-8a43-4b59-92fc-202e92e45bf9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:03:06 ha-674765 crio[684]: time="2024-06-25 16:03:06.056844220Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd7837c56cda33edd21808fe9d0441fdd08abd1bdebe8f801a3611412c9f4915,PodSandboxId:d18f421cdb437abaad95182a5581045ed7639dbd944aa4d3b7cbcf8551a67f1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1719331123602460126,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qjw4r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49031b4f-d04c-44bf-9725-094e7df6945c,},Annotations:map[string]string{io.kubernetes.container.hash: 37a65fc,io.kubernetes.container.restartCount: 0,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec00b1016861e1acd703a40af2b274a6d7bd1b0b9c8e37a463cd46994e27ce0b,PodSandboxId:2249d5de30294a4411052d912ac663f8b0d2f1f1e010eace066e8eba72cff9f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719330982140184531,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-84zkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f6426f8-a0c4-470c-b2b1-b62fa304c078,},Annotations:map[string]string{io.kubernetes.container.hash: f8bf5066,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dff3834f63a382816899f273fe9970c90e171b84aa75c6626dd2435c35f00d8,PodSandboxId:36a6cd372769cb4e0b61267af34ab214f7e98a894596572c1f18f91b85865fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719330982105059965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28db5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1426e4a3-2f25-47e9-9b28-b23a81a3a19a,},Annotations:map[string]string{io.kubernetes.container.hash: 370c32f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1a641a439e9e1a4812e2d701e924065cf043e82fbeeb31138efc1da913f59e,PodSandboxId:8c17c2f81c12f58083ae9c6e26c825dc4701f9b68cbf01e1583716d703bc9269,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1719330981985152081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c227c5cf-2bd6-4ebf-9fdb-09d4229cf421,},Annotations:map[string]string{io.kubernetes.container.hash: 69a09345,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3562eeca26a1a1131c2f80e823f3f8779c3e235bf200331ce891f51b37df0c,PodSandboxId:1623f777feead6fabf15a4e29139791f4c38ed435a0c368cdb1dfebf1a45ec64,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:171933098
0130797768,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ntq77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37736a9f-5b4c-421c-9027-81e961ab8550,},Annotations:map[string]string{io.kubernetes.container.hash: d770c3b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea2f95fa7a7ef2b8d281a5e1b9c59c317c03084458fe57df036d763e43180c,PodSandboxId:41bb01e505abeae0d97e1019e5c33c9523130dd829e516e2ded6ffc9072c534b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719330979753070308,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh9n5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a24539-3168-42cc-93b3-d0b1e283d0bd,},Annotations:map[string]string{io.kubernetes.container.hash: 676d07db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3ed8ce894547a7bc3deba857b5d7d733af8ba225cb579c469f090460bff27d3,PodSandboxId:ebc12e3f7a7ce5a2b5a6c7beddfe956ea5de58d27aa020dc6979043c872fc752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1719330963710738330,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41be34697cf0082e06e8923557664cc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ed432b8fb613e5494924e83c46de1bfe881b19bdb28b693da5298cf99f2e65,PodSandboxId:3498fabc6b53a97d349e73fb2ef8cb3df14eef29ff198836b4363612da9f0c9e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719330960414652980,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b7475c05d199411d1b89c7e4e58c52,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9938e238e129cd0d797a5de776e0d7b756bc8f39188223f4151974b19fb7506c,PodSandboxId:e3236a96cfba0a3dd95041d0792f8fa934df06572898ae6514913c5050b5fe9c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719330960405239254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-m
anager,io.kubernetes.pod.name: kube-controller-manager-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b6a570fe6dbd4fb43c7d5ee0090fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a40f818bed683af529089283a92813b3d87d93d9cb9290b6081645f3bced82fa,PodSandboxId:fb68107e9ae65837eff4df8cb043150d9fab87c80158db9f2658dcb99e1ae72c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719330960357440842,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551c5ef04195f1917356d613c1c2825c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b5bb08a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e903f61a215f1423f9e270e19b11a7357fee578dc62e4dd059dbe9c47a999c32,PodSandboxId:4695ac9edbc507bbbbe372a26cedd099c7de9206dd507a961697b309c7144f1e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719330960349402853,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-674765,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c950a69aa02d41e644621ad51e81615e,},Annotations:map[string]string{io.kubernetes.container.hash: ca1dc9e0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=60a052f0-8a43-4b59-92fc-202e92e45bf9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:03:06 ha-674765 crio[684]: time="2024-06-25 16:03:06.097236203Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c3786b40-df5e-4f50-9cdc-c167079327ac name=/runtime.v1.RuntimeService/Version
	Jun 25 16:03:06 ha-674765 crio[684]: time="2024-06-25 16:03:06.097306554Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c3786b40-df5e-4f50-9cdc-c167079327ac name=/runtime.v1.RuntimeService/Version
	Jun 25 16:03:06 ha-674765 crio[684]: time="2024-06-25 16:03:06.098607037Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=94fd0711-4097-4359-9426-4bcd506c67da name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:03:06 ha-674765 crio[684]: time="2024-06-25 16:03:06.099112923Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719331386099090824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=94fd0711-4097-4359-9426-4bcd506c67da name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:03:06 ha-674765 crio[684]: time="2024-06-25 16:03:06.099700312Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1b7d4ee-c9e9-4b9a-a23d-53174f7c0c19 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:03:06 ha-674765 crio[684]: time="2024-06-25 16:03:06.099757156Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1b7d4ee-c9e9-4b9a-a23d-53174f7c0c19 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:03:06 ha-674765 crio[684]: time="2024-06-25 16:03:06.100236251Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd7837c56cda33edd21808fe9d0441fdd08abd1bdebe8f801a3611412c9f4915,PodSandboxId:d18f421cdb437abaad95182a5581045ed7639dbd944aa4d3b7cbcf8551a67f1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1719331123602460126,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qjw4r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49031b4f-d04c-44bf-9725-094e7df6945c,},Annotations:map[string]string{io.kubernetes.container.hash: 37a65fc,io.kubernetes.container.restartCount: 0,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec00b1016861e1acd703a40af2b274a6d7bd1b0b9c8e37a463cd46994e27ce0b,PodSandboxId:2249d5de30294a4411052d912ac663f8b0d2f1f1e010eace066e8eba72cff9f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719330982140184531,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-84zkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f6426f8-a0c4-470c-b2b1-b62fa304c078,},Annotations:map[string]string{io.kubernetes.container.hash: f8bf5066,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dff3834f63a382816899f273fe9970c90e171b84aa75c6626dd2435c35f00d8,PodSandboxId:36a6cd372769cb4e0b61267af34ab214f7e98a894596572c1f18f91b85865fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719330982105059965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28db5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1426e4a3-2f25-47e9-9b28-b23a81a3a19a,},Annotations:map[string]string{io.kubernetes.container.hash: 370c32f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1a641a439e9e1a4812e2d701e924065cf043e82fbeeb31138efc1da913f59e,PodSandboxId:8c17c2f81c12f58083ae9c6e26c825dc4701f9b68cbf01e1583716d703bc9269,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1719330981985152081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c227c5cf-2bd6-4ebf-9fdb-09d4229cf421,},Annotations:map[string]string{io.kubernetes.container.hash: 69a09345,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3562eeca26a1a1131c2f80e823f3f8779c3e235bf200331ce891f51b37df0c,PodSandboxId:1623f777feead6fabf15a4e29139791f4c38ed435a0c368cdb1dfebf1a45ec64,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:171933098
0130797768,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ntq77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37736a9f-5b4c-421c-9027-81e961ab8550,},Annotations:map[string]string{io.kubernetes.container.hash: d770c3b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea2f95fa7a7ef2b8d281a5e1b9c59c317c03084458fe57df036d763e43180c,PodSandboxId:41bb01e505abeae0d97e1019e5c33c9523130dd829e516e2ded6ffc9072c534b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719330979753070308,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh9n5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a24539-3168-42cc-93b3-d0b1e283d0bd,},Annotations:map[string]string{io.kubernetes.container.hash: 676d07db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3ed8ce894547a7bc3deba857b5d7d733af8ba225cb579c469f090460bff27d3,PodSandboxId:ebc12e3f7a7ce5a2b5a6c7beddfe956ea5de58d27aa020dc6979043c872fc752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1719330963710738330,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41be34697cf0082e06e8923557664cc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ed432b8fb613e5494924e83c46de1bfe881b19bdb28b693da5298cf99f2e65,PodSandboxId:3498fabc6b53a97d349e73fb2ef8cb3df14eef29ff198836b4363612da9f0c9e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719330960414652980,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b7475c05d199411d1b89c7e4e58c52,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9938e238e129cd0d797a5de776e0d7b756bc8f39188223f4151974b19fb7506c,PodSandboxId:e3236a96cfba0a3dd95041d0792f8fa934df06572898ae6514913c5050b5fe9c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719330960405239254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-m
anager,io.kubernetes.pod.name: kube-controller-manager-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b6a570fe6dbd4fb43c7d5ee0090fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a40f818bed683af529089283a92813b3d87d93d9cb9290b6081645f3bced82fa,PodSandboxId:fb68107e9ae65837eff4df8cb043150d9fab87c80158db9f2658dcb99e1ae72c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719330960357440842,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551c5ef04195f1917356d613c1c2825c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b5bb08a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e903f61a215f1423f9e270e19b11a7357fee578dc62e4dd059dbe9c47a999c32,PodSandboxId:4695ac9edbc507bbbbe372a26cedd099c7de9206dd507a961697b309c7144f1e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719330960349402853,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-674765,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c950a69aa02d41e644621ad51e81615e,},Annotations:map[string]string{io.kubernetes.container.hash: ca1dc9e0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e1b7d4ee-c9e9-4b9a-a23d-53174f7c0c19 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dd7837c56cda3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   d18f421cdb437       busybox-fc5497c4f-qjw4r
	ec00b1016861e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   2249d5de30294       coredns-7db6d8ff4d-84zkt
	5dff3834f63a3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   36a6cd372769c       coredns-7db6d8ff4d-28db5
	6e1a641a439e9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   8c17c2f81c12f       storage-provisioner
	ff3562eeca26a       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      6 minutes ago       Running             kindnet-cni               0                   1623f777feead       kindnet-ntq77
	7cea2f95fa7a7       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      6 minutes ago       Running             kube-proxy                0                   41bb01e505abe       kube-proxy-rh9n5
	c3ed8ce894547       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   ebc12e3f7a7ce       kube-vip-ha-674765
	a7ed432b8fb61       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      7 minutes ago       Running             kube-scheduler            0                   3498fabc6b53a       kube-scheduler-ha-674765
	9938e238e129c       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      7 minutes ago       Running             kube-controller-manager   0                   e3236a96cfba0       kube-controller-manager-ha-674765
	a40f818bed683       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      7 minutes ago       Running             kube-apiserver            0                   fb68107e9ae65       kube-apiserver-ha-674765
	e903f61a215f1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   4695ac9edbc50       etcd-ha-674765
	
	
	==> coredns [5dff3834f63a382816899f273fe9970c90e171b84aa75c6626dd2435c35f00d8] <==
	[INFO] 10.244.1.2:37149 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000135115s
	[INFO] 10.244.1.2:55180 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000186786s
	[INFO] 10.244.0.4:51274 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116315s
	[INFO] 10.244.0.4:58927 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00175173s
	[INFO] 10.244.0.4:58086 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147783s
	[INFO] 10.244.0.4:40292 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069393s
	[INFO] 10.244.0.4:47923 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008723s
	[INFO] 10.244.2.2:43607 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000173082s
	[INFO] 10.244.2.2:58140 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152475s
	[INFO] 10.244.2.2:58321 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00137128s
	[INFO] 10.244.2.2:51827 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149446s
	[INFO] 10.244.1.2:53516 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091184s
	[INFO] 10.244.1.2:50837 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111518s
	[INFO] 10.244.0.4:36638 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096918s
	[INFO] 10.244.0.4:34420 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062938s
	[INFO] 10.244.2.2:47727 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109009s
	[INFO] 10.244.2.2:53547 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114146s
	[INFO] 10.244.2.2:52427 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103325s
	[INFO] 10.244.0.4:35396 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015274s
	[INFO] 10.244.0.4:37070 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000162346s
	[INFO] 10.244.0.4:34499 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000181932s
	[INFO] 10.244.2.2:39406 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141568s
	[INFO] 10.244.2.2:45012 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000125003s
	[INFO] 10.244.2.2:37480 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111741s
	[INFO] 10.244.2.2:38163 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000160497s
	
	
	==> coredns [ec00b1016861e1acd703a40af2b274a6d7bd1b0b9c8e37a463cd46994e27ce0b] <==
	[INFO] 10.244.1.2:59350 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.001047557s
	[INFO] 10.244.0.4:38331 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001513057s
	[INFO] 10.244.2.2:38263 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00022826s
	[INFO] 10.244.1.2:37269 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000228534s
	[INFO] 10.244.1.2:37116 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000238078s
	[INFO] 10.244.1.2:57875 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000221098s
	[INFO] 10.244.1.2:50144 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003188543s
	[INFO] 10.244.1.2:52779 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000142142s
	[INFO] 10.244.0.4:54632 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118741s
	[INFO] 10.244.0.4:42979 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001269082s
	[INFO] 10.244.0.4:36713 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000084451s
	[INFO] 10.244.2.2:41583 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001597985s
	[INFO] 10.244.2.2:38518 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007901s
	[INFO] 10.244.2.2:36859 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000163343s
	[INFO] 10.244.2.2:48049 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012051s
	[INFO] 10.244.1.2:41596 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099989s
	[INFO] 10.244.1.2:53657 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152026s
	[INFO] 10.244.0.4:37328 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010546s
	[INFO] 10.244.0.4:37107 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000078111s
	[INFO] 10.244.2.2:58260 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109644s
	[INFO] 10.244.1.2:51838 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161138s
	[INFO] 10.244.1.2:34544 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000245952s
	[INFO] 10.244.1.2:41848 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000133045s
	[INFO] 10.244.1.2:55838 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000180767s
	[INFO] 10.244.0.4:56384 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000068132s
	
	
	==> describe nodes <==
	Name:               ha-674765
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-674765
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b
	                    minikube.k8s.io/name=ha-674765
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_25T15_56_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 25 Jun 2024 15:56:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-674765
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 25 Jun 2024 16:03:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 25 Jun 2024 15:59:10 +0000   Tue, 25 Jun 2024 15:56:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 25 Jun 2024 15:59:10 +0000   Tue, 25 Jun 2024 15:56:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 25 Jun 2024 15:59:10 +0000   Tue, 25 Jun 2024 15:56:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 25 Jun 2024 15:59:10 +0000   Tue, 25 Jun 2024 15:56:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.128
	  Hostname:    ha-674765
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b9f74a4b042742c8a0ef29e697c6459c
	  System UUID:                b9f74a4b-0427-42c8-a0ef-29e697c6459c
	  Boot ID:                    52ea2189-696e-4985-bf6b-90448e3e85aa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qjw4r              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 coredns-7db6d8ff4d-28db5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m47s
	  kube-system                 coredns-7db6d8ff4d-84zkt             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m47s
	  kube-system                 etcd-ha-674765                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m
	  kube-system                 kindnet-ntq77                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m47s
	  kube-system                 kube-apiserver-ha-674765             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 kube-controller-manager-ha-674765    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 kube-proxy-rh9n5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m47s
	  kube-system                 kube-scheduler-ha-674765             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m1s
	  kube-system                 kube-vip-ha-674765                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m2s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 6m46s                kube-proxy       
	  Normal  NodeHasSufficientPID     7m7s (x7 over 7m7s)  kubelet          Node ha-674765 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m7s (x8 over 7m7s)  kubelet          Node ha-674765 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m7s (x8 over 7m7s)  kubelet          Node ha-674765 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m                   kubelet          Node ha-674765 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m                   kubelet          Node ha-674765 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m                   kubelet          Node ha-674765 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m48s                node-controller  Node ha-674765 event: Registered Node ha-674765 in Controller
	  Normal  NodeReady                6m45s                kubelet          Node ha-674765 status is now: NodeReady
	  Normal  RegisteredNode           5m39s                node-controller  Node ha-674765 event: Registered Node ha-674765 in Controller
	  Normal  RegisteredNode           4m31s                node-controller  Node ha-674765 event: Registered Node ha-674765 in Controller
	
	
	Name:               ha-674765-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-674765-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b
	                    minikube.k8s.io/name=ha-674765
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_25T15_57_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 25 Jun 2024 15:57:09 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-674765-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 25 Jun 2024 15:59:44 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 25 Jun 2024 15:59:12 +0000   Tue, 25 Jun 2024 16:00:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 25 Jun 2024 15:59:12 +0000   Tue, 25 Jun 2024 16:00:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 25 Jun 2024 15:59:12 +0000   Tue, 25 Jun 2024 16:00:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 25 Jun 2024 15:59:12 +0000   Tue, 25 Jun 2024 16:00:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.53
	  Hostname:    ha-674765-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 45ee8176fa3149fdb7e4bac2256c26b7
	  System UUID:                45ee8176-fa31-49fd-b7e4-bac2256c26b7
	  Boot ID:                    3d0db961-cfa5-4af0-9483-cceea6d2d005
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jx6j4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 etcd-ha-674765-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m55s
	  kube-system                 kindnet-kkgdq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m57s
	  kube-system                 kube-apiserver-ha-674765-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-controller-manager-ha-674765-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-proxy-lsmft                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-scheduler-ha-674765-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-vip-ha-674765-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m57s (x8 over 5m57s)  kubelet          Node ha-674765-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m57s (x8 over 5m57s)  kubelet          Node ha-674765-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m57s (x7 over 5m57s)  kubelet          Node ha-674765-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m53s                  node-controller  Node ha-674765-m02 event: Registered Node ha-674765-m02 in Controller
	  Normal  RegisteredNode           5m39s                  node-controller  Node ha-674765-m02 event: Registered Node ha-674765-m02 in Controller
	  Normal  RegisteredNode           4m31s                  node-controller  Node ha-674765-m02 event: Registered Node ha-674765-m02 in Controller
	  Normal  NodeNotReady             2m41s                  node-controller  Node ha-674765-m02 status is now: NodeNotReady
	
	
	Name:               ha-674765-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-674765-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b
	                    minikube.k8s.io/name=ha-674765
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_25T15_58_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 25 Jun 2024 15:58:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-674765-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 25 Jun 2024 16:03:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 25 Jun 2024 15:58:48 +0000   Tue, 25 Jun 2024 15:58:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 25 Jun 2024 15:58:48 +0000   Tue, 25 Jun 2024 15:58:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 25 Jun 2024 15:58:48 +0000   Tue, 25 Jun 2024 15:58:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 25 Jun 2024 15:58:48 +0000   Tue, 25 Jun 2024 15:58:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.77
	  Hostname:    ha-674765-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 82d78f3bf896447aa83d147c6be1d104
	  System UUID:                82d78f3b-f896-447a-a83d-147c6be1d104
	  Boot ID:                    9e6335e7-1ac0-4745-936d-85efc228a44f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vn65x                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 etcd-ha-674765-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m47s
	  kube-system                 kindnet-px4dn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m49s
	  kube-system                 kube-apiserver-ha-674765-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-controller-manager-ha-674765-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-proxy-swfsx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-scheduler-ha-674765-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-vip-ha-674765-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m44s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m49s (x8 over 4m49s)  kubelet          Node ha-674765-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m49s (x8 over 4m49s)  kubelet          Node ha-674765-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m49s (x7 over 4m49s)  kubelet          Node ha-674765-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m48s                  node-controller  Node ha-674765-m03 event: Registered Node ha-674765-m03 in Controller
	  Normal  RegisteredNode           4m44s                  node-controller  Node ha-674765-m03 event: Registered Node ha-674765-m03 in Controller
	  Normal  RegisteredNode           4m31s                  node-controller  Node ha-674765-m03 event: Registered Node ha-674765-m03 in Controller
	
	
	Name:               ha-674765-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-674765-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b
	                    minikube.k8s.io/name=ha-674765
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_25T15_59_18_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 25 Jun 2024 15:59:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-674765-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 25 Jun 2024 16:03:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 25 Jun 2024 15:59:48 +0000   Tue, 25 Jun 2024 15:59:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 25 Jun 2024 15:59:48 +0000   Tue, 25 Jun 2024 15:59:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 25 Jun 2024 15:59:48 +0000   Tue, 25 Jun 2024 15:59:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 25 Jun 2024 15:59:48 +0000   Tue, 25 Jun 2024 15:59:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.7
	  Hostname:    ha-674765-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 153487087a1a4805965ecc96230ab164
	  System UUID:                15348708-7a1a-4805-965e-cc96230ab164
	  Boot ID:                    6eb50b4e-74eb-4263-80f2-15c137071776
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6z24k       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m49s
	  kube-system                 kube-proxy-szzwh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m43s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m49s (x2 over 3m49s)  kubelet          Node ha-674765-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m49s (x2 over 3m49s)  kubelet          Node ha-674765-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m49s (x2 over 3m49s)  kubelet          Node ha-674765-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m48s                  node-controller  Node ha-674765-m04 event: Registered Node ha-674765-m04 in Controller
	  Normal  RegisteredNode           3m46s                  node-controller  Node ha-674765-m04 event: Registered Node ha-674765-m04 in Controller
	  Normal  RegisteredNode           3m44s                  node-controller  Node ha-674765-m04 event: Registered Node ha-674765-m04 in Controller
	  Normal  NodeReady                3m38s                  kubelet          Node ha-674765-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jun25 15:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051304] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040153] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.502989] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.368528] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.612454] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.515677] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.054245] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062657] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.163326] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.122319] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.250574] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.069829] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +3.840914] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.060181] kauditd_printk_skb: 158 callbacks suppressed
	[Jun25 15:56] systemd-fstab-generator[1368]: Ignoring "noauto" option for root device
	[  +0.085967] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.321595] kauditd_printk_skb: 21 callbacks suppressed
	[Jun25 15:57] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [e903f61a215f1423f9e270e19b11a7357fee578dc62e4dd059dbe9c47a999c32] <==
	{"level":"warn","ts":"2024-06-25T16:03:06.375738Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:03:06.382381Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:03:06.385578Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:03:06.394144Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:03:06.402028Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:03:06.40913Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:03:06.415203Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:03:06.419619Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:03:06.422759Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:03:06.43264Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:03:06.438272Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:03:06.440325Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:03:06.444095Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:03:06.444987Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:03:06.449018Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:03:06.453122Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:03:06.463483Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:03:06.469106Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:03:06.474805Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:03:06.477511Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:03:06.48281Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:03:06.487801Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:03:06.493946Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:03:06.499417Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-25T16:03:06.540158Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 16:03:06 up 7 min,  0 users,  load average: 0.02, 0.09, 0.06
	Linux ha-674765 5.10.207 #1 SMP Mon Jun 24 21:03:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ff3562eeca26a1a1131c2f80e823f3f8779c3e235bf200331ce891f51b37df0c] <==
	I0625 16:02:31.437975       1 main.go:250] Node ha-674765-m04 has CIDR [10.244.3.0/24] 
	I0625 16:02:41.447173       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0625 16:02:41.447316       1 main.go:227] handling current node
	I0625 16:02:41.447390       1 main.go:223] Handling node with IPs: map[192.168.39.53:{}]
	I0625 16:02:41.447414       1 main.go:250] Node ha-674765-m02 has CIDR [10.244.1.0/24] 
	I0625 16:02:41.447579       1 main.go:223] Handling node with IPs: map[192.168.39.77:{}]
	I0625 16:02:41.447599       1 main.go:250] Node ha-674765-m03 has CIDR [10.244.2.0/24] 
	I0625 16:02:41.447658       1 main.go:223] Handling node with IPs: map[192.168.39.7:{}]
	I0625 16:02:41.447676       1 main.go:250] Node ha-674765-m04 has CIDR [10.244.3.0/24] 
	I0625 16:02:51.453396       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0625 16:02:51.453483       1 main.go:227] handling current node
	I0625 16:02:51.453507       1 main.go:223] Handling node with IPs: map[192.168.39.53:{}]
	I0625 16:02:51.453523       1 main.go:250] Node ha-674765-m02 has CIDR [10.244.1.0/24] 
	I0625 16:02:51.453634       1 main.go:223] Handling node with IPs: map[192.168.39.77:{}]
	I0625 16:02:51.453653       1 main.go:250] Node ha-674765-m03 has CIDR [10.244.2.0/24] 
	I0625 16:02:51.453709       1 main.go:223] Handling node with IPs: map[192.168.39.7:{}]
	I0625 16:02:51.453730       1 main.go:250] Node ha-674765-m04 has CIDR [10.244.3.0/24] 
	I0625 16:03:01.467802       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0625 16:03:01.467844       1 main.go:227] handling current node
	I0625 16:03:01.467858       1 main.go:223] Handling node with IPs: map[192.168.39.53:{}]
	I0625 16:03:01.467914       1 main.go:250] Node ha-674765-m02 has CIDR [10.244.1.0/24] 
	I0625 16:03:01.468068       1 main.go:223] Handling node with IPs: map[192.168.39.77:{}]
	I0625 16:03:01.468102       1 main.go:250] Node ha-674765-m03 has CIDR [10.244.2.0/24] 
	I0625 16:03:01.468198       1 main.go:223] Handling node with IPs: map[192.168.39.7:{}]
	I0625 16:03:01.468250       1 main.go:250] Node ha-674765-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [a40f818bed683af529089283a92813b3d87d93d9cb9290b6081645f3bced82fa] <==
	I0625 15:56:04.930005       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0625 15:56:05.151090       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0625 15:56:06.646123       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0625 15:56:06.673651       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0625 15:56:06.684625       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0625 15:56:19.306091       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0625 15:56:19.358752       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0625 15:58:18.307766       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0625 15:58:18.307842       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0625 15:58:18.308003       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 5.3µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0625 15:58:18.309224       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0625 15:58:18.309349       1 timeout.go:142] post-timeout activity - time-elapsed: 1.703289ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0625 15:58:45.099490       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50334: use of closed network connection
	E0625 15:58:45.278184       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50354: use of closed network connection
	E0625 15:58:45.463496       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50368: use of closed network connection
	E0625 15:58:45.673613       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50384: use of closed network connection
	E0625 15:58:45.856439       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50396: use of closed network connection
	E0625 15:58:46.041584       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50416: use of closed network connection
	E0625 15:58:46.218419       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50424: use of closed network connection
	E0625 15:58:46.559429       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50442: use of closed network connection
	E0625 15:58:46.844474       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50462: use of closed network connection
	E0625 15:58:47.016062       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50488: use of closed network connection
	E0625 15:58:47.208625       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50508: use of closed network connection
	E0625 15:58:47.392225       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50522: use of closed network connection
	E0625 15:58:47.579536       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50530: use of closed network connection
	
	
	==> kube-controller-manager [9938e238e129cd0d797a5de776e0d7b756bc8f39188223f4151974b19fb7506c] <==
	I0625 15:58:18.552671       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-674765-m03"
	I0625 15:58:40.402823       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.498371ms"
	I0625 15:58:40.450576       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.600006ms"
	I0625 15:58:40.450701       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.696µs"
	I0625 15:58:40.629374       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="176.065135ms"
	I0625 15:58:40.696690       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.239142ms"
	I0625 15:58:40.752178       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.389917ms"
	I0625 15:58:40.752293       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.326µs"
	I0625 15:58:40.822785       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.424524ms"
	I0625 15:58:40.823068       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.258µs"
	I0625 15:58:40.921183       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.309693ms"
	I0625 15:58:40.921346       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.638µs"
	I0625 15:58:44.287931       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.53826ms"
	I0625 15:58:44.298399       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.405907ms"
	I0625 15:58:44.298630       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.473µs"
	I0625 15:58:44.634066       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.779607ms"
	I0625 15:58:44.634342       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.308µs"
	I0625 15:59:17.920783       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-674765-m04\" does not exist"
	E0625 15:59:17.923263       1 certificate_controller.go:146] Sync csr-d2mwl failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-d2mwl": the object has been modified; please apply your changes to the latest version and try again
	I0625 15:59:17.955139       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-674765-m04" podCIDRs=["10.244.3.0/24"]
	I0625 15:59:18.578564       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-674765-m04"
	I0625 15:59:28.607395       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-674765-m04"
	I0625 16:00:25.912792       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-674765-m04"
	I0625 16:00:26.009569       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.392548ms"
	I0625 16:00:26.009659       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.249µs"
	
	
	==> kube-proxy [7cea2f95fa7a7ef2b8d281a5e1b9c59c317c03084458fe57df036d763e43180c] <==
	I0625 15:56:19.915548       1 server_linux.go:69] "Using iptables proxy"
	I0625 15:56:19.937492       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.128"]
	I0625 15:56:19.974432       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0625 15:56:19.974479       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0625 15:56:19.974492       1 server_linux.go:165] "Using iptables Proxier"
	I0625 15:56:19.977183       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0625 15:56:19.977364       1 server.go:872] "Version info" version="v1.30.2"
	I0625 15:56:19.977392       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0625 15:56:19.978794       1 config.go:192] "Starting service config controller"
	I0625 15:56:19.978825       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0625 15:56:19.978847       1 config.go:101] "Starting endpoint slice config controller"
	I0625 15:56:19.978851       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0625 15:56:19.979407       1 config.go:319] "Starting node config controller"
	I0625 15:56:19.979431       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0625 15:56:20.079672       1 shared_informer.go:320] Caches are synced for node config
	I0625 15:56:20.079718       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0625 15:56:20.079734       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [a7ed432b8fb613e5494924e83c46de1bfe881b19bdb28b693da5298cf99f2e65] <==
	E0625 15:56:04.238145       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0625 15:56:04.238102       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0625 15:56:04.238183       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0625 15:56:04.303707       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0625 15:56:04.303821       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0625 15:56:04.327725       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0625 15:56:04.327837       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0625 15:56:04.409284       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0625 15:56:04.409328       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0625 15:56:04.438451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0625 15:56:04.438497       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0625 15:56:06.878478       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0625 15:58:40.377246       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-vn65x\": pod busybox-fc5497c4f-vn65x is already assigned to node \"ha-674765-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-vn65x" node="ha-674765-m03"
	E0625 15:58:40.377969       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-vn65x\": pod busybox-fc5497c4f-vn65x is already assigned to node \"ha-674765-m03\"" pod="default/busybox-fc5497c4f-vn65x"
	I0625 15:58:40.378142       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-vn65x" node="ha-674765-m03"
	E0625 15:59:18.009813       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-6z24k\": pod kindnet-6z24k is already assigned to node \"ha-674765-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-6z24k" node="ha-674765-m04"
	E0625 15:59:18.009977       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-6z24k\": pod kindnet-6z24k is already assigned to node \"ha-674765-m04\"" pod="kube-system/kindnet-6z24k"
	E0625 15:59:18.010569       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-szzwh\": pod kube-proxy-szzwh is already assigned to node \"ha-674765-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-szzwh" node="ha-674765-m04"
	E0625 15:59:18.010649       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 825f1e68-aec0-44cf-9817-b248a6078673(kube-system/kube-proxy-szzwh) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-szzwh"
	E0625 15:59:18.010677       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-szzwh\": pod kube-proxy-szzwh is already assigned to node \"ha-674765-m04\"" pod="kube-system/kube-proxy-szzwh"
	I0625 15:59:18.010702       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-szzwh" node="ha-674765-m04"
	E0625 15:59:18.040032       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-g48pp\": pod kube-proxy-g48pp is already assigned to node \"ha-674765-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-g48pp" node="ha-674765-m04"
	E0625 15:59:18.040108       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod ab1311ca-030d-4407-87ba-2ff9c8b8feed(kube-system/kube-proxy-g48pp) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-g48pp"
	E0625 15:59:18.040132       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-g48pp\": pod kube-proxy-g48pp is already assigned to node \"ha-674765-m04\"" pod="kube-system/kube-proxy-g48pp"
	I0625 15:59:18.040154       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-g48pp" node="ha-674765-m04"
	
	
	==> kubelet <==
	Jun 25 15:59:06 ha-674765 kubelet[1375]: E0625 15:59:06.604778    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 25 15:59:06 ha-674765 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 25 15:59:06 ha-674765 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 25 15:59:06 ha-674765 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 25 15:59:06 ha-674765 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 25 16:00:06 ha-674765 kubelet[1375]: E0625 16:00:06.603591    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 25 16:00:06 ha-674765 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 25 16:00:06 ha-674765 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 25 16:00:06 ha-674765 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 25 16:00:06 ha-674765 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 25 16:01:06 ha-674765 kubelet[1375]: E0625 16:01:06.605369    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 25 16:01:06 ha-674765 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 25 16:01:06 ha-674765 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 25 16:01:06 ha-674765 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 25 16:01:06 ha-674765 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 25 16:02:06 ha-674765 kubelet[1375]: E0625 16:02:06.604802    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 25 16:02:06 ha-674765 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 25 16:02:06 ha-674765 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 25 16:02:06 ha-674765 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 25 16:02:06 ha-674765 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 25 16:03:06 ha-674765 kubelet[1375]: E0625 16:03:06.603441    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 25 16:03:06 ha-674765 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 25 16:03:06 ha-674765 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 25 16:03:06 ha-674765 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 25 16:03:06 ha-674765 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-674765 -n ha-674765
helpers_test.go:261: (dbg) Run:  kubectl --context ha-674765 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (53.64s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (359.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-674765 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-674765 -v=7 --alsologtostderr
E0625 16:04:29.127722   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
E0625 16:04:56.815427   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-674765 -v=7 --alsologtostderr: exit status 82 (2m1.905437064s)

                                                
                                                
-- stdout --
	* Stopping node "ha-674765-m04"  ...
	* Stopping node "ha-674765-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0625 16:03:07.911777   41926 out.go:291] Setting OutFile to fd 1 ...
	I0625 16:03:07.912050   41926 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:03:07.912060   41926 out.go:304] Setting ErrFile to fd 2...
	I0625 16:03:07.912064   41926 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:03:07.912269   41926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 16:03:07.912531   41926 out.go:298] Setting JSON to false
	I0625 16:03:07.912624   41926 mustload.go:65] Loading cluster: ha-674765
	I0625 16:03:07.912987   41926 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:03:07.913086   41926 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/config.json ...
	I0625 16:03:07.913265   41926 mustload.go:65] Loading cluster: ha-674765
	I0625 16:03:07.913436   41926 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:03:07.913470   41926 stop.go:39] StopHost: ha-674765-m04
	I0625 16:03:07.913884   41926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:03:07.913944   41926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:03:07.928656   41926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41371
	I0625 16:03:07.928997   41926 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:03:07.929471   41926 main.go:141] libmachine: Using API Version  1
	I0625 16:03:07.929490   41926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:03:07.929792   41926 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:03:07.932107   41926 out.go:177] * Stopping node "ha-674765-m04"  ...
	I0625 16:03:07.933771   41926 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0625 16:03:07.933803   41926 main.go:141] libmachine: (ha-674765-m04) Calling .DriverName
	I0625 16:03:07.934018   41926 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0625 16:03:07.934044   41926 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHHostname
	I0625 16:03:07.936722   41926 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:03:07.937058   41926 main.go:141] libmachine: (ha-674765-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:21:a2", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:59:02 +0000 UTC Type:0 Mac:52:54:00:7a:21:a2 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-674765-m04 Clientid:01:52:54:00:7a:21:a2}
	I0625 16:03:07.937099   41926 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined IP address 192.168.39.7 and MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:03:07.937223   41926 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHPort
	I0625 16:03:07.937382   41926 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHKeyPath
	I0625 16:03:07.937546   41926 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHUsername
	I0625 16:03:07.937714   41926 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m04/id_rsa Username:docker}
	I0625 16:03:08.021301   41926 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0625 16:03:08.074392   41926 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0625 16:03:08.127891   41926 main.go:141] libmachine: Stopping "ha-674765-m04"...
	I0625 16:03:08.127932   41926 main.go:141] libmachine: (ha-674765-m04) Calling .GetState
	I0625 16:03:08.129422   41926 main.go:141] libmachine: (ha-674765-m04) Calling .Stop
	I0625 16:03:08.132652   41926 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 0/120
	I0625 16:03:09.354497   41926 main.go:141] libmachine: (ha-674765-m04) Calling .GetState
	I0625 16:03:09.356504   41926 main.go:141] libmachine: Machine "ha-674765-m04" was stopped.
	I0625 16:03:09.356521   41926 stop.go:75] duration metric: took 1.422751277s to stop
	I0625 16:03:09.356558   41926 stop.go:39] StopHost: ha-674765-m03
	I0625 16:03:09.356967   41926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:03:09.357022   41926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:03:09.373084   41926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40515
	I0625 16:03:09.373427   41926 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:03:09.373901   41926 main.go:141] libmachine: Using API Version  1
	I0625 16:03:09.373919   41926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:03:09.374203   41926 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:03:09.376162   41926 out.go:177] * Stopping node "ha-674765-m03"  ...
	I0625 16:03:09.377330   41926 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0625 16:03:09.377356   41926 main.go:141] libmachine: (ha-674765-m03) Calling .DriverName
	I0625 16:03:09.377550   41926 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0625 16:03:09.377579   41926 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHHostname
	I0625 16:03:09.380376   41926 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:03:09.380833   41926 main.go:141] libmachine: (ha-674765-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ed:f4", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:57:47 +0000 UTC Type:0 Mac:52:54:00:82:ed:f4 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-674765-m03 Clientid:01:52:54:00:82:ed:f4}
	I0625 16:03:09.380853   41926 main.go:141] libmachine: (ha-674765-m03) DBG | domain ha-674765-m03 has defined IP address 192.168.39.77 and MAC address 52:54:00:82:ed:f4 in network mk-ha-674765
	I0625 16:03:09.380985   41926 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHPort
	I0625 16:03:09.381142   41926 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHKeyPath
	I0625 16:03:09.381268   41926 main.go:141] libmachine: (ha-674765-m03) Calling .GetSSHUsername
	I0625 16:03:09.381403   41926 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m03/id_rsa Username:docker}
	I0625 16:03:09.472138   41926 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0625 16:03:09.527321   41926 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0625 16:03:09.580938   41926 main.go:141] libmachine: Stopping "ha-674765-m03"...
	I0625 16:03:09.580958   41926 main.go:141] libmachine: (ha-674765-m03) Calling .GetState
	I0625 16:03:09.582322   41926 main.go:141] libmachine: (ha-674765-m03) Calling .Stop
	I0625 16:03:09.585571   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 0/120
	I0625 16:03:10.586965   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 1/120
	I0625 16:03:11.588341   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 2/120
	I0625 16:03:12.589776   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 3/120
	I0625 16:03:13.591534   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 4/120
	I0625 16:03:14.593697   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 5/120
	I0625 16:03:15.595525   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 6/120
	I0625 16:03:16.596927   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 7/120
	I0625 16:03:17.598213   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 8/120
	I0625 16:03:18.599771   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 9/120
	I0625 16:03:19.601891   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 10/120
	I0625 16:03:20.603196   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 11/120
	I0625 16:03:21.605531   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 12/120
	I0625 16:03:22.607316   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 13/120
	I0625 16:03:23.609016   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 14/120
	I0625 16:03:24.610673   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 15/120
	I0625 16:03:25.612169   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 16/120
	I0625 16:03:26.613717   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 17/120
	I0625 16:03:27.615214   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 18/120
	I0625 16:03:28.616590   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 19/120
	I0625 16:03:29.618368   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 20/120
	I0625 16:03:30.619829   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 21/120
	I0625 16:03:31.621218   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 22/120
	I0625 16:03:32.622603   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 23/120
	I0625 16:03:33.623921   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 24/120
	I0625 16:03:34.625670   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 25/120
	I0625 16:03:35.626977   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 26/120
	I0625 16:03:36.629106   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 27/120
	I0625 16:03:37.630524   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 28/120
	I0625 16:03:38.632316   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 29/120
	I0625 16:03:39.634445   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 30/120
	I0625 16:03:40.635873   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 31/120
	I0625 16:03:41.637405   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 32/120
	I0625 16:03:42.638842   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 33/120
	I0625 16:03:43.641043   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 34/120
	I0625 16:03:44.642853   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 35/120
	I0625 16:03:45.644181   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 36/120
	I0625 16:03:46.645744   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 37/120
	I0625 16:03:47.646967   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 38/120
	I0625 16:03:48.648909   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 39/120
	I0625 16:03:49.650450   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 40/120
	I0625 16:03:50.652106   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 41/120
	I0625 16:03:51.653303   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 42/120
	I0625 16:03:52.654695   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 43/120
	I0625 16:03:53.656147   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 44/120
	I0625 16:03:54.657837   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 45/120
	I0625 16:03:55.659147   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 46/120
	I0625 16:03:56.660870   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 47/120
	I0625 16:03:57.662126   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 48/120
	I0625 16:03:58.663951   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 49/120
	I0625 16:03:59.665574   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 50/120
	I0625 16:04:00.666880   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 51/120
	I0625 16:04:01.668951   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 52/120
	I0625 16:04:02.670414   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 53/120
	I0625 16:04:03.671706   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 54/120
	I0625 16:04:04.673340   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 55/120
	I0625 16:04:05.674599   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 56/120
	I0625 16:04:06.675867   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 57/120
	I0625 16:04:07.677282   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 58/120
	I0625 16:04:08.678558   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 59/120
	I0625 16:04:09.680337   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 60/120
	I0625 16:04:10.681712   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 61/120
	I0625 16:04:11.683085   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 62/120
	I0625 16:04:12.684546   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 63/120
	I0625 16:04:13.685643   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 64/120
	I0625 16:04:14.687164   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 65/120
	I0625 16:04:15.688435   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 66/120
	I0625 16:04:16.689839   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 67/120
	I0625 16:04:17.692067   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 68/120
	I0625 16:04:18.693458   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 69/120
	I0625 16:04:19.695270   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 70/120
	I0625 16:04:20.697060   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 71/120
	I0625 16:04:21.698462   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 72/120
	I0625 16:04:22.699864   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 73/120
	I0625 16:04:23.701276   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 74/120
	I0625 16:04:24.702597   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 75/120
	I0625 16:04:25.703737   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 76/120
	I0625 16:04:26.705076   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 77/120
	I0625 16:04:27.706425   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 78/120
	I0625 16:04:28.708077   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 79/120
	I0625 16:04:29.710509   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 80/120
	I0625 16:04:30.711679   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 81/120
	I0625 16:04:31.712971   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 82/120
	I0625 16:04:32.714115   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 83/120
	I0625 16:04:33.715439   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 84/120
	I0625 16:04:34.716817   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 85/120
	I0625 16:04:35.718016   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 86/120
	I0625 16:04:36.719383   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 87/120
	I0625 16:04:37.720789   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 88/120
	I0625 16:04:38.722394   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 89/120
	I0625 16:04:39.723878   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 90/120
	I0625 16:04:40.725223   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 91/120
	I0625 16:04:41.726426   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 92/120
	I0625 16:04:42.727606   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 93/120
	I0625 16:04:43.728950   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 94/120
	I0625 16:04:44.730559   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 95/120
	I0625 16:04:45.731781   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 96/120
	I0625 16:04:46.732983   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 97/120
	I0625 16:04:47.734261   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 98/120
	I0625 16:04:48.735585   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 99/120
	I0625 16:04:49.737275   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 100/120
	I0625 16:04:50.738598   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 101/120
	I0625 16:04:51.739966   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 102/120
	I0625 16:04:52.741231   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 103/120
	I0625 16:04:53.742570   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 104/120
	I0625 16:04:54.744089   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 105/120
	I0625 16:04:55.745363   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 106/120
	I0625 16:04:56.747099   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 107/120
	I0625 16:04:57.748497   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 108/120
	I0625 16:04:58.749826   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 109/120
	I0625 16:04:59.751177   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 110/120
	I0625 16:05:00.752859   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 111/120
	I0625 16:05:01.754367   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 112/120
	I0625 16:05:02.755787   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 113/120
	I0625 16:05:03.757183   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 114/120
	I0625 16:05:04.759069   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 115/120
	I0625 16:05:05.760869   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 116/120
	I0625 16:05:06.762358   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 117/120
	I0625 16:05:07.763744   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 118/120
	I0625 16:05:08.765932   41926 main.go:141] libmachine: (ha-674765-m03) Waiting for machine to stop 119/120
	I0625 16:05:09.766681   41926 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0625 16:05:09.766755   41926 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0625 16:05:09.768735   41926 out.go:177] 
	W0625 16:05:09.770373   41926 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0625 16:05:09.770392   41926 out.go:239] * 
	* 
	W0625 16:05:09.772627   41926 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0625 16:05:09.773970   41926 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 stop -p ha-674765 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-674765 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-674765 --wait=true -v=7 --alsologtostderr: (3m54.836136799s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-674765
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-674765 -n ha-674765
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-674765 logs -n 25: (1.934369946s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-674765 cp ha-674765-m03:/home/docker/cp-test.txt                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m02:/home/docker/cp-test_ha-674765-m03_ha-674765-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n ha-674765-m02 sudo cat                                          | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | /home/docker/cp-test_ha-674765-m03_ha-674765-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-674765 cp ha-674765-m03:/home/docker/cp-test.txt                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m04:/home/docker/cp-test_ha-674765-m03_ha-674765-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n ha-674765-m04 sudo cat                                          | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | /home/docker/cp-test_ha-674765-m03_ha-674765-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-674765 cp testdata/cp-test.txt                                                | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-674765 cp ha-674765-m04:/home/docker/cp-test.txt                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2213486447/001/cp-test_ha-674765-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-674765 cp ha-674765-m04:/home/docker/cp-test.txt                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765:/home/docker/cp-test_ha-674765-m04_ha-674765.txt                       |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n ha-674765 sudo cat                                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | /home/docker/cp-test_ha-674765-m04_ha-674765.txt                                 |           |         |         |                     |                     |
	| cp      | ha-674765 cp ha-674765-m04:/home/docker/cp-test.txt                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m02:/home/docker/cp-test_ha-674765-m04_ha-674765-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n ha-674765-m02 sudo cat                                          | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | /home/docker/cp-test_ha-674765-m04_ha-674765-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-674765 cp ha-674765-m04:/home/docker/cp-test.txt                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m03:/home/docker/cp-test_ha-674765-m04_ha-674765-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n ha-674765-m03 sudo cat                                          | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | /home/docker/cp-test_ha-674765-m04_ha-674765-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-674765 node stop m02 -v=7                                                     | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-674765 node start m02 -v=7                                                    | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 16:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-674765 -v=7                                                           | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 16:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-674765 -v=7                                                                | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 16:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-674765 --wait=true -v=7                                                    | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 16:05 UTC | 25 Jun 24 16:09 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-674765                                                                | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 16:09 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
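
The cp/ssh pairs in the audit above all follow the same round trip: copy a probe file between nodes with minikube cp, then read it back over minikube ssh -n to confirm it arrived. A minimal bash sketch of one such round trip, using the profile, node names and paths from the table; the final diff is an added sanity check, not something the audit records:

    # copy /home/docker/cp-test.txt from node m04 to node m02, then verify the copy byte-for-byte
    PROFILE=ha-674765
    SRC=ha-674765-m04
    DST=ha-674765-m02
    DST_PATH=/home/docker/cp-test_ha-674765-m04_ha-674765-m02.txt

    out/minikube-linux-amd64 -p "$PROFILE" cp "$SRC:/home/docker/cp-test.txt" "$DST:$DST_PATH"

    diff <(out/minikube-linux-amd64 -p "$PROFILE" ssh -n "$SRC" sudo cat /home/docker/cp-test.txt) \
         <(out/minikube-linux-amd64 -p "$PROFILE" ssh -n "$DST" sudo cat "$DST_PATH")
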
	
	
	==> Last Start <==
	Log file created at: 2024/06/25 16:05:09
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0625 16:05:09.817265   42394 out.go:291] Setting OutFile to fd 1 ...
	I0625 16:05:09.817520   42394 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:05:09.817529   42394 out.go:304] Setting ErrFile to fd 2...
	I0625 16:05:09.817534   42394 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:05:09.817691   42394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 16:05:09.818224   42394 out.go:298] Setting JSON to false
	I0625 16:05:09.819082   42394 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6454,"bootTime":1719325056,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0625 16:05:09.819137   42394 start.go:139] virtualization: kvm guest
	I0625 16:05:09.821289   42394 out.go:177] * [ha-674765] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0625 16:05:09.822774   42394 out.go:177]   - MINIKUBE_LOCATION=19128
	I0625 16:05:09.822801   42394 notify.go:220] Checking for updates...
	I0625 16:05:09.825480   42394 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0625 16:05:09.826758   42394 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19128-13846/kubeconfig
	I0625 16:05:09.827938   42394 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 16:05:09.829113   42394 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0625 16:05:09.830302   42394 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0625 16:05:09.831775   42394 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:05:09.831878   42394 driver.go:392] Setting default libvirt URI to qemu:///system
	I0625 16:05:09.832267   42394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:05:09.832318   42394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:05:09.847483   42394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36969
	I0625 16:05:09.847979   42394 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:05:09.848550   42394 main.go:141] libmachine: Using API Version  1
	I0625 16:05:09.848574   42394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:05:09.848930   42394 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:05:09.849094   42394 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 16:05:09.882021   42394 out.go:177] * Using the kvm2 driver based on existing profile
	I0625 16:05:09.883677   42394 start.go:297] selected driver: kvm2
	I0625 16:05:09.883690   42394 start.go:901] validating driver "kvm2" against &{Name:ha-674765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.2 ClusterName:ha-674765 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.77 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.7 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:f
alse freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 16:05:09.883853   42394 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0625 16:05:09.884271   42394 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0625 16:05:09.884343   42394 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19128-13846/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0625 16:05:09.898595   42394 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0625 16:05:09.899222   42394 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0625 16:05:09.899251   42394 cni.go:84] Creating CNI manager for ""
	I0625 16:05:09.899258   42394 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0625 16:05:09.899331   42394 start.go:340] cluster config:
	{Name:ha-674765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-674765 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.77 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.7 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller
:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 16:05:09.899470   42394 iso.go:125] acquiring lock: {Name:mk76df652d5e768afc73443035d5ecb8b75ed16c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0625 16:05:09.901092   42394 out.go:177] * Starting "ha-674765" primary control-plane node in "ha-674765" cluster
	I0625 16:05:09.902178   42394 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0625 16:05:09.902211   42394 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0625 16:05:09.902221   42394 cache.go:56] Caching tarball of preloaded images
	I0625 16:05:09.902276   42394 preload.go:173] Found /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0625 16:05:09.902286   42394 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0625 16:05:09.902397   42394 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/config.json ...
	I0625 16:05:09.902603   42394 start.go:360] acquireMachinesLock for ha-674765: {Name:mk2a1ebee912b37a2b68bf2f76641f82f8fc2fcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0625 16:05:09.902655   42394 start.go:364] duration metric: took 35.527µs to acquireMachinesLock for "ha-674765"
	I0625 16:05:09.902668   42394 start.go:96] Skipping create...Using existing machine configuration
	I0625 16:05:09.902678   42394 fix.go:54] fixHost starting: 
	I0625 16:05:09.902947   42394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:05:09.902976   42394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:05:09.915910   42394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36393
	I0625 16:05:09.916316   42394 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:05:09.916838   42394 main.go:141] libmachine: Using API Version  1
	I0625 16:05:09.916859   42394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:05:09.917146   42394 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:05:09.917310   42394 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 16:05:09.917445   42394 main.go:141] libmachine: (ha-674765) Calling .GetState
	I0625 16:05:09.919017   42394 fix.go:112] recreateIfNeeded on ha-674765: state=Running err=<nil>
	W0625 16:05:09.919049   42394 fix.go:138] unexpected machine state, will restart: <nil>
	I0625 16:05:09.920810   42394 out.go:177] * Updating the running kvm2 "ha-674765" VM ...
	I0625 16:05:09.922001   42394 machine.go:94] provisionDockerMachine start ...
	I0625 16:05:09.922027   42394 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 16:05:09.922232   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:05:09.924729   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:05:09.925157   42394 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:05:09.925179   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:05:09.925377   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:05:09.925551   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:05:09.925704   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:05:09.925852   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:05:09.926013   42394 main.go:141] libmachine: Using SSH client type: native
	I0625 16:05:09.926214   42394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0625 16:05:09.926224   42394 main.go:141] libmachine: About to run SSH command:
	hostname
	I0625 16:05:10.040686   42394 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-674765
	
	I0625 16:05:10.040715   42394 main.go:141] libmachine: (ha-674765) Calling .GetMachineName
	I0625 16:05:10.040965   42394 buildroot.go:166] provisioning hostname "ha-674765"
	I0625 16:05:10.040989   42394 main.go:141] libmachine: (ha-674765) Calling .GetMachineName
	I0625 16:05:10.041210   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:05:10.043642   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:05:10.043961   42394 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:05:10.043991   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:05:10.044126   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:05:10.044310   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:05:10.044470   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:05:10.044573   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:05:10.044749   42394 main.go:141] libmachine: Using SSH client type: native
	I0625 16:05:10.044910   42394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0625 16:05:10.044922   42394 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-674765 && echo "ha-674765" | sudo tee /etc/hostname
	I0625 16:05:10.165501   42394 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-674765
	
	I0625 16:05:10.165525   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:05:10.168115   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:05:10.168467   42394 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:05:10.168497   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:05:10.168659   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:05:10.168829   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:05:10.168955   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:05:10.169089   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:05:10.169207   42394 main.go:141] libmachine: Using SSH client type: native
	I0625 16:05:10.169392   42394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0625 16:05:10.169408   42394 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-674765' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-674765/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-674765' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0625 16:05:10.280769   42394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
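
The hostname step above sets the VM hostname, writes /etc/hostname, and pins 127.0.1.1 to ha-674765 in /etc/hosts only if no entry for the name exists yet. A quick way to confirm all three from the CI host, reusing the minikube ssh -n form from the command audit:

    # check the hostname and the /etc/hosts pin the provisioner just wrote
    out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765 hostname
    out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765 cat /etc/hostname
    out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765 grep ha-674765 /etc/hosts
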
	I0625 16:05:10.280814   42394 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19128-13846/.minikube CaCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19128-13846/.minikube}
	I0625 16:05:10.280845   42394 buildroot.go:174] setting up certificates
	I0625 16:05:10.280853   42394 provision.go:84] configureAuth start
	I0625 16:05:10.280864   42394 main.go:141] libmachine: (ha-674765) Calling .GetMachineName
	I0625 16:05:10.281129   42394 main.go:141] libmachine: (ha-674765) Calling .GetIP
	I0625 16:05:10.283846   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:05:10.284168   42394 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:05:10.284195   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:05:10.284332   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:05:10.286056   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:05:10.286376   42394 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:05:10.286394   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:05:10.286558   42394 provision.go:143] copyHostCerts
	I0625 16:05:10.286588   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem
	I0625 16:05:10.286643   42394 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem, removing ...
	I0625 16:05:10.286654   42394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem
	I0625 16:05:10.286728   42394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem (1123 bytes)
	I0625 16:05:10.286822   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem
	I0625 16:05:10.286852   42394 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem, removing ...
	I0625 16:05:10.286863   42394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem
	I0625 16:05:10.286901   42394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem (1679 bytes)
	I0625 16:05:10.286967   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem
	I0625 16:05:10.286989   42394 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem, removing ...
	I0625 16:05:10.286995   42394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem
	I0625 16:05:10.287028   42394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem (1078 bytes)
	I0625 16:05:10.287098   42394 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem org=jenkins.ha-674765 san=[127.0.0.1 192.168.39.128 ha-674765 localhost minikube]
	I0625 16:05:10.610048   42394 provision.go:177] copyRemoteCerts
	I0625 16:05:10.610104   42394 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0625 16:05:10.610128   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:05:10.612686   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:05:10.612995   42394 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:05:10.613024   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:05:10.613219   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:05:10.613444   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:05:10.613576   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:05:10.613728   42394 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 16:05:10.701508   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0625 16:05:10.701582   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0625 16:05:10.729319   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0625 16:05:10.729388   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0625 16:05:10.757285   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0625 16:05:10.757368   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0625 16:05:10.782256   42394 provision.go:87] duration metric: took 501.388189ms to configureAuth
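
configureAuth pushed the machine CA plus a freshly signed server certificate and key into /etc/docker on the guest; the log above shows the SAN list the server cert was generated with (127.0.0.1, 192.168.39.128, ha-674765, localhost, minikube). The same certificate is kept locally under the machines directory, so it can be inspected without SSH, assuming openssl is installed on the CI host:

    # subject, validity window and SANs of the server cert that was just provisioned
    CERT=/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem
    openssl x509 -in "$CERT" -noout -subject -dates
    openssl x509 -in "$CERT" -noout -text | grep -A1 'Subject Alternative Name'
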
	I0625 16:05:10.782283   42394 buildroot.go:189] setting minikube options for container-runtime
	I0625 16:05:10.782550   42394 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:05:10.782658   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:05:10.785018   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:05:10.785514   42394 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:05:10.785543   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:05:10.785646   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:05:10.785851   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:05:10.786007   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:05:10.786151   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:05:10.786285   42394 main.go:141] libmachine: Using SSH client type: native
	I0625 16:05:10.786463   42394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0625 16:05:10.786505   42394 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0625 16:06:41.661731   42394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0625 16:06:41.661758   42394 machine.go:97] duration metric: took 1m31.739743418s to provisionDockerMachine
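
Nearly all of the 1m31s spent in provisionDockerMachine went into the single SSH command above, which drops CRI-O's insecure-registry option and restarts the service. The same step as a standalone script run inside the guest, with the path and value exactly as logged:

    # write minikube's CRI-O environment drop-in and restart the runtime
    sudo mkdir -p /etc/sysconfig
    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio

Most of that time is presumably the crio restart itself; the later restart at 16:06:43 completes in about 4.4s once the node is warm.
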
	I0625 16:06:41.661772   42394 start.go:293] postStartSetup for "ha-674765" (driver="kvm2")
	I0625 16:06:41.661786   42394 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0625 16:06:41.661808   42394 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 16:06:41.662122   42394 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0625 16:06:41.662191   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:06:41.665074   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:06:41.665486   42394 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:06:41.665518   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:06:41.665642   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:06:41.665819   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:06:41.665985   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:06:41.666131   42394 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 16:06:41.749633   42394 ssh_runner.go:195] Run: cat /etc/os-release
	I0625 16:06:41.753985   42394 info.go:137] Remote host: Buildroot 2023.02.9
	I0625 16:06:41.754006   42394 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/addons for local assets ...
	I0625 16:06:41.754069   42394 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/files for local assets ...
	I0625 16:06:41.754144   42394 filesync.go:149] local asset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> 212392.pem in /etc/ssl/certs
	I0625 16:06:41.754155   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> /etc/ssl/certs/212392.pem
	I0625 16:06:41.754234   42394 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0625 16:06:41.763287   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem --> /etc/ssl/certs/212392.pem (1708 bytes)
	I0625 16:06:41.786264   42394 start.go:296] duration metric: took 124.481229ms for postStartSetup
	I0625 16:06:41.786297   42394 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 16:06:41.786549   42394 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0625 16:06:41.786573   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:06:41.788681   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:06:41.788978   42394 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:06:41.789006   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:06:41.789138   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:06:41.789299   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:06:41.789463   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:06:41.789597   42394 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	W0625 16:06:41.872420   42394 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0625 16:06:41.872441   42394 fix.go:56] duration metric: took 1m31.96976201s for fixHost
	I0625 16:06:41.872465   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:06:41.874807   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:06:41.875178   42394 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:06:41.875199   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:06:41.875345   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:06:41.875513   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:06:41.875660   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:06:41.875794   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:06:41.875951   42394 main.go:141] libmachine: Using SSH client type: native
	I0625 16:06:41.876148   42394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0625 16:06:41.876160   42394 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0625 16:06:41.982782   42394 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719331601.945785731
	
	I0625 16:06:41.982807   42394 fix.go:216] guest clock: 1719331601.945785731
	I0625 16:06:41.982817   42394 fix.go:229] Guest: 2024-06-25 16:06:41.945785731 +0000 UTC Remote: 2024-06-25 16:06:41.872450956 +0000 UTC m=+92.088965672 (delta=73.334775ms)
	I0625 16:06:41.982849   42394 fix.go:200] guest clock delta is within tolerance: 73.334775ms
	I0625 16:06:41.982858   42394 start.go:83] releasing machines lock for "ha-674765", held for 1m32.080192997s
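
fixHost ends by sampling the guest clock over SSH and comparing it with the local clock; the 73 ms delta above is inside minikube's tolerance, so no clock adjustment is made. A rough way to repeat that probe by hand (the delta also absorbs SSH latency, so treat it as an upper bound):

    # sample the guest clock, then the local clock, and print the difference in seconds
    GUEST=$(out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765 date +%s.%N | tr -d '\r')
    LOCAL=$(date +%s.%N)
    awk -v l="$LOCAL" -v g="$GUEST" 'BEGIN { printf "guest=%s local=%s delta=%.3fs\n", g, l, l - g }'
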
	I0625 16:06:41.982887   42394 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 16:06:41.983141   42394 main.go:141] libmachine: (ha-674765) Calling .GetIP
	I0625 16:06:41.985489   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:06:41.985847   42394 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:06:41.985873   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:06:41.986022   42394 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 16:06:41.986495   42394 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 16:06:41.986667   42394 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 16:06:41.986725   42394 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0625 16:06:41.986778   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:06:41.986843   42394 ssh_runner.go:195] Run: cat /version.json
	I0625 16:06:41.986864   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:06:41.989114   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:06:41.989131   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:06:41.989488   42394 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:06:41.989513   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:06:41.989538   42394 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:06:41.989554   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:06:41.989718   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:06:41.989722   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:06:41.989872   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:06:41.989925   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:06:41.990000   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:06:41.990060   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:06:41.990119   42394 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 16:06:41.990160   42394 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 16:06:42.075126   42394 ssh_runner.go:195] Run: systemctl --version
	I0625 16:06:42.100317   42394 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0625 16:06:42.270994   42394 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0625 16:06:42.276889   42394 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0625 16:06:42.276947   42394 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0625 16:06:42.286219   42394 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0625 16:06:42.286237   42394 start.go:494] detecting cgroup driver to use...
	I0625 16:06:42.286301   42394 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0625 16:06:42.303209   42394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0625 16:06:42.317211   42394 docker.go:217] disabling cri-docker service (if available) ...
	I0625 16:06:42.317249   42394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0625 16:06:42.330574   42394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0625 16:06:42.343639   42394 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0625 16:06:42.488289   42394 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0625 16:06:42.632161   42394 docker.go:233] disabling docker service ...
	I0625 16:06:42.632220   42394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0625 16:06:42.649058   42394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0625 16:06:42.662269   42394 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0625 16:06:42.805188   42394 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0625 16:06:42.944795   42394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
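
Before CRI-O is reconfigured, the start path makes sure the competing runtimes cannot own the CRI socket: containerd is stopped, and cri-dockerd and docker have their sockets and services stopped, disabled and masked, as the sequence of systemctl calls above shows. Roughly the same sequence as one idempotent guest-side snippet (the '|| true' guards are an addition so already-stopped units do not abort the script):

    # keep containerd, cri-dockerd and docker away from the CRI socket
    sudo systemctl stop -f containerd || true
    for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
        sudo systemctl stop -f "$unit" || true
        sudo systemctl disable "$unit" || true
    done
    sudo systemctl mask cri-docker.service docker.service
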
	I0625 16:06:42.958589   42394 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0625 16:06:42.977288   42394 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0625 16:06:42.977349   42394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:06:42.987568   42394 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0625 16:06:42.987616   42394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:06:42.997392   42394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:06:43.007273   42394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:06:43.017239   42394 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0625 16:06:43.027334   42394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:06:43.037256   42394 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:06:43.048837   42394 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:06:43.058485   42394 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0625 16:06:43.067715   42394 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0625 16:06:43.076430   42394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 16:06:43.215238   42394 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0625 16:06:47.621064   42394 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.405797623s)
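
The block of sed calls above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, put conmon in the pod cgroup, open unprivileged ports via default_sysctls, and enable IP forwarding, then reload units and restart crio. Gathered into one guest-side script with the logged paths and values (the loopback-CNI check and the /etc/cni/net.mk cleanup that the log also runs are left out here):

    CONF=/etc/crio/crio.conf.d/02-crio.conf

    # point crictl at CRI-O's socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

    # pause image, cgroup driver, conmon cgroup
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"

    # let pods bind low ports, and make sure IPv4 forwarding is on
    sudo grep -q '^ *default_sysctls' "$CONF" || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'

    sudo systemctl daemon-reload && sudo systemctl restart crio
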
	I0625 16:06:47.621097   42394 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0625 16:06:47.621137   42394 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0625 16:06:47.626000   42394 start.go:562] Will wait 60s for crictl version
	I0625 16:06:47.626033   42394 ssh_runner.go:195] Run: which crictl
	I0625 16:06:47.629820   42394 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0625 16:06:47.666102   42394 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0625 16:06:47.666162   42394 ssh_runner.go:195] Run: crio --version
	I0625 16:06:47.695046   42394 ssh_runner.go:195] Run: crio --version
	I0625 16:06:47.724143   42394 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0625 16:06:47.725373   42394 main.go:141] libmachine: (ha-674765) Calling .GetIP
	I0625 16:06:47.727754   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:06:47.728112   42394 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:06:47.728136   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:06:47.728354   42394 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0625 16:06:47.732896   42394 kubeadm.go:877] updating cluster {Name:ha-674765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-674765 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.77 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.7 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0625 16:06:47.733074   42394 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0625 16:06:47.733133   42394 ssh_runner.go:195] Run: sudo crictl images --output json
	I0625 16:06:47.778894   42394 crio.go:514] all images are preloaded for cri-o runtime.
	I0625 16:06:47.778916   42394 crio.go:433] Images already preloaded, skipping extraction
	I0625 16:06:47.778966   42394 ssh_runner.go:195] Run: sudo crictl images --output json
	I0625 16:06:47.811499   42394 crio.go:514] all images are preloaded for cri-o runtime.
	I0625 16:06:47.811518   42394 cache_images.go:84] Images are preloaded, skipping loading
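
Both crictl queries came back with the full preloaded image set, so minikube skips extracting the tarball again. A manual spot check for the same thing, filtering for the images a v1.30.2 control plane needs plus the pause image pinned above:

    # list what CRI-O already has cached on the node
    out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765 sudo crictl images \
      | grep -E 'pause|kube-(apiserver|controller-manager|scheduler|proxy)|etcd|coredns'
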
	I0625 16:06:47.811531   42394 kubeadm.go:928] updating node { 192.168.39.128 8443 v1.30.2 crio true true} ...
	I0625 16:06:47.811657   42394 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-674765 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-674765 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0625 16:06:47.811764   42394 ssh_runner.go:195] Run: crio config
	I0625 16:06:47.857857   42394 cni.go:84] Creating CNI manager for ""
	I0625 16:06:47.857873   42394 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0625 16:06:47.857887   42394 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0625 16:06:47.857906   42394 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.128 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-674765 NodeName:ha-674765 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0625 16:06:47.858029   42394 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.128
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-674765"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
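
This rendered config is what later lands in /var/tmp/minikube/kubeadm.yaml.new (the 2153-byte scp further down). The log does not run a separate validation step, but kubeadm v1.26+ ships one, so a manual sanity check against the binaries already on the node would look roughly like this:

    # optional: ask kubeadm itself whether the rendered config parses cleanly
    out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765 -- \
      sudo /var/lib/minikube/binaries/v1.30.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
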
	
	I0625 16:06:47.858046   42394 kube-vip.go:115] generating kube-vip config ...
	I0625 16:06:47.858082   42394 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0625 16:06:47.869688   42394 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0625 16:06:47.869770   42394 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
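	(The kube-vip static pod above advertises 192.168.39.254 on port 8443 as the HA control-plane virtual IP, with control-plane load-balancing enabled. A minimal sketch, independent of minikube, of how one might probe that VIP from another host; the address and port are taken from the manifest above:)

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A plain TCP dial is enough to confirm the VIP is being served by
	// whichever control-plane node currently holds the kube-vip lease.
	addr := net.JoinHostPort("192.168.39.254", "8443")
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("VIP reachable at", addr)
}
```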
	I0625 16:06:47.869812   42394 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0625 16:06:47.878780   42394 binaries.go:44] Found k8s binaries, skipping transfer
	I0625 16:06:47.878839   42394 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0625 16:06:47.887764   42394 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0625 16:06:47.904389   42394 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0625 16:06:47.920572   42394 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0625 16:06:47.936468   42394 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0625 16:06:47.952990   42394 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0625 16:06:47.957290   42394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 16:06:48.098562   42394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0625 16:06:48.113331   42394 certs.go:68] Setting up /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765 for IP: 192.168.39.128
	I0625 16:06:48.113357   42394 certs.go:194] generating shared ca certs ...
	I0625 16:06:48.113377   42394 certs.go:226] acquiring lock for ca certs: {Name:mkac904b769881cd26c50f043dc80ff92937f71f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:06:48.113527   42394 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key
	I0625 16:06:48.113579   42394 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key
	I0625 16:06:48.113593   42394 certs.go:256] generating profile certs ...
	I0625 16:06:48.113687   42394 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.key
	I0625 16:06:48.113723   42394 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.4cb2e099
	I0625 16:06:48.113749   42394 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.4cb2e099 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.128 192.168.39.53 192.168.39.77 192.168.39.254]
	I0625 16:06:48.207036   42394 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.4cb2e099 ...
	I0625 16:06:48.207065   42394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.4cb2e099: {Name:mk0733bebf3f9051b8529571108dd2538df7993c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:06:48.207231   42394 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.4cb2e099 ...
	I0625 16:06:48.207245   42394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.4cb2e099: {Name:mk8f24a82632e47ed049a4c94ea6a0986178e217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:06:48.207318   42394 certs.go:381] copying /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.4cb2e099 -> /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt
	I0625 16:06:48.207454   42394 certs.go:385] copying /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.4cb2e099 -> /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key
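	(The apiserver certificate generated above carries IP SANs for the service IP, the three control-plane node IPs and the 192.168.39.254 VIP. As an illustrative sketch, not part of minikube, the SANs can be read back from the installed certificate; the path is the scp destination shown further down in this log:)

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Print the IP and DNS SANs baked into the apiserver certificate; they should
	// match the node IPs and the 192.168.39.254 VIP listed in the log above.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	for _, ip := range cert.IPAddresses {
		fmt.Println("IP SAN:", ip)
	}
	for _, dns := range cert.DNSNames {
		fmt.Println("DNS SAN:", dns)
	}
}
```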
	I0625 16:06:48.207587   42394 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.key
	I0625 16:06:48.207601   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0625 16:06:48.207614   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0625 16:06:48.207626   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0625 16:06:48.207639   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0625 16:06:48.207651   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0625 16:06:48.207663   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0625 16:06:48.207675   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0625 16:06:48.207686   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0625 16:06:48.207731   42394 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem (1338 bytes)
	W0625 16:06:48.207756   42394 certs.go:480] ignoring /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239_empty.pem, impossibly tiny 0 bytes
	I0625 16:06:48.207766   42394 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem (1679 bytes)
	I0625 16:06:48.207787   42394 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem (1078 bytes)
	I0625 16:06:48.207807   42394 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem (1123 bytes)
	I0625 16:06:48.207830   42394 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem (1679 bytes)
	I0625 16:06:48.207864   42394 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem (1708 bytes)
	I0625 16:06:48.207890   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:06:48.207903   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem -> /usr/share/ca-certificates/21239.pem
	I0625 16:06:48.207915   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> /usr/share/ca-certificates/212392.pem
	I0625 16:06:48.208457   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0625 16:06:48.233191   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0625 16:06:48.256024   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0625 16:06:48.278992   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0625 16:06:48.302164   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0625 16:06:48.324402   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0625 16:06:48.347199   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0625 16:06:48.369959   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0625 16:06:48.392547   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0625 16:06:48.414855   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem --> /usr/share/ca-certificates/21239.pem (1338 bytes)
	I0625 16:06:48.437793   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem --> /usr/share/ca-certificates/212392.pem (1708 bytes)
	I0625 16:06:48.460737   42394 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0625 16:06:48.476664   42394 ssh_runner.go:195] Run: openssl version
	I0625 16:06:48.482526   42394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0625 16:06:48.493355   42394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:06:48.498067   42394 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 25 15:10 /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:06:48.498133   42394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:06:48.504113   42394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0625 16:06:48.513540   42394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21239.pem && ln -fs /usr/share/ca-certificates/21239.pem /etc/ssl/certs/21239.pem"
	I0625 16:06:48.524097   42394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21239.pem
	I0625 16:06:48.528662   42394 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 25 15:51 /usr/share/ca-certificates/21239.pem
	I0625 16:06:48.528696   42394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21239.pem
	I0625 16:06:48.534440   42394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21239.pem /etc/ssl/certs/51391683.0"
	I0625 16:06:48.545113   42394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/212392.pem && ln -fs /usr/share/ca-certificates/212392.pem /etc/ssl/certs/212392.pem"
	I0625 16:06:48.555854   42394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/212392.pem
	I0625 16:06:48.560160   42394 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 25 15:51 /usr/share/ca-certificates/212392.pem
	I0625 16:06:48.560202   42394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/212392.pem
	I0625 16:06:48.565968   42394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/212392.pem /etc/ssl/certs/3ec20f2e.0"
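	(The three blocks above compute the OpenSSL subject hash of each CA certificate and link it into /etc/ssl/certs as "<hash>.0" so the system trust store picks it up. A simplified sketch of the same hash-and-symlink step, assuming openssl is on PATH and linking the hash name directly at the certificate rather than via the intermediate link the log uses:)

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkIntoTrustStore computes the OpenSSL subject hash of a CA certificate and
// symlinks it into the trust directory as "<hash>.0", mirroring the commands above.
func linkIntoTrustStore(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(trustDir, hash+".0")
	// Replace any stale link, then point it at the certificate.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkIntoTrustStore("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```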
	I0625 16:06:48.575602   42394 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0625 16:06:48.580058   42394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0625 16:06:48.585491   42394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0625 16:06:48.591444   42394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0625 16:06:48.596827   42394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0625 16:06:48.602143   42394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0625 16:06:48.607441   42394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
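	(The "-checkend 86400" invocations above ask OpenSSL whether each certificate expires within the next 24 hours. An equivalent check written against crypto/x509, as a sketch rather than minikube's own implementation; the certificate path is one of those tested above:)

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// which is what "openssl x509 -checkend 86400" tests for a 24h window.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```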
	I0625 16:06:48.612779   42394 kubeadm.go:391] StartCluster: {Name:ha-674765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-674765 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.77 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.7 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:
false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 16:06:48.612892   42394 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0625 16:06:48.612922   42394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0625 16:06:48.650583   42394 cri.go:89] found id: "daf79ee8eb5658497e09cbd16752883ca88b8bfd2864ee00372d27eeb5806285"
	I0625 16:06:48.650605   42394 cri.go:89] found id: "fadbd7cdc44f4b5fab7ac7f2de7b57b27f6dab29aa6dba74bb989ef8265b7cd2"
	I0625 16:06:48.650610   42394 cri.go:89] found id: "1ec9f1864b5040c1de810ed7acdfe5a3f522fad6960bc9d5b6942aceabad78e1"
	I0625 16:06:48.650614   42394 cri.go:89] found id: "ee37c24ba30f73306a896f334f612e36909a30fe60cc981a14e0a33c613ee062"
	I0625 16:06:48.650618   42394 cri.go:89] found id: "ac8ac5af3896e66b7a766c2dee0a0ca88408fc6840949a3c60309e6d98f11fa1"
	I0625 16:06:48.650622   42394 cri.go:89] found id: "ec00b1016861e1acd703a40af2b274a6d7bd1b0b9c8e37a463cd46994e27ce0b"
	I0625 16:06:48.650639   42394 cri.go:89] found id: "5dff3834f63a382816899f273fe9970c90e171b84aa75c6626dd2435c35f00d8"
	I0625 16:06:48.650778   42394 cri.go:89] found id: "7cea2f95fa7a7ef2b8d281a5e1b9c59c317c03084458fe57df036d763e43180c"
	I0625 16:06:48.650785   42394 cri.go:89] found id: "c3ed8ce894547a7bc3deba857b5d7d733af8ba225cb579c469f090460bff27d3"
	I0625 16:06:48.650792   42394 cri.go:89] found id: "a7ed432b8fb613e5494924e83c46de1bfe881b19bdb28b693da5298cf99f2e65"
	I0625 16:06:48.650796   42394 cri.go:89] found id: "9938e238e129cd0d797a5de776e0d7b756bc8f39188223f4151974b19fb7506c"
	I0625 16:06:48.650800   42394 cri.go:89] found id: "a40f818bed683af529089283a92813b3d87d93d9cb9290b6081645f3bced82fa"
	I0625 16:06:48.650804   42394 cri.go:89] found id: "e903f61a215f1423f9e270e19b11a7357fee578dc62e4dd059dbe9c47a999c32"
	I0625 16:06:48.650807   42394 cri.go:89] found id: ""
	I0625 16:06:48.650850   42394 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jun 25 16:09:05 ha-674765 crio[3936]: time="2024-06-25 16:09:05.344400157Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719331745344378271,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37348436-3657-4cb6-9c1a-42b63d2a665c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:09:05 ha-674765 crio[3936]: time="2024-06-25 16:09:05.344959728Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0d773ba-fe29-422a-9a65-a5fc1a30bbd7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:09:05 ha-674765 crio[3936]: time="2024-06-25 16:09:05.345061658Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0d773ba-fe29-422a-9a65-a5fc1a30bbd7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:09:05 ha-674765 crio[3936]: time="2024-06-25 16:09:05.345588010Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0c3426ec1f55cfcf657863a2bb9d1d1ed319358c204b3013dc6ea1040ef44ede,PodSandboxId:c1c292c76f6457f213fc624fc352e1d70746b4876bd376bbdfe5c523e7ae157c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1719331679606027881,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ntq77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37736a9f-5b4c-421c-9027-81e961ab8550,},Annotations:map[string]string{io.kubernetes.container.hash: d770c3b8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62eb4fc54721a6cc41ca7c6a7e298bebe70e0bd709ee162f8002bcd99b09f69,PodSandboxId:118263dc5302bc116abde0d88cbfa447be7e9d2d76ee9dfe5e54d3287225cdaa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719331650600404866,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b6a570fe6dbd4fb43c7d5ee0090fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91160cb26919875a6826440f019fdc7a29c4cb1cca9f728b8634425c5d0d0055,PodSandboxId:a74cac97870233d5be7e99d859054dbe59bb62b62dcebb9b93d53d7d97e6ff21,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1719331647935856429,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qjw4r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49031b4f-d04c-44bf-9725-094e7df6945c,},Annotations:map[string]string{io.kubernetes.container.hash: 37a65fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5616a5da2347e48b25e5207a5d383a1b8395ebbdf7444bc962fbab867ccdb3e,PodSandboxId:95eb77c224efb384a2d8a7be87f4b501d838237501dd4657d3dfe9e941a351cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719331645702587343,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551c5ef04195f1917356d613c1c2825c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b5bb08a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81c1834b73fff1174eded512f55252efedba82c00af1234c13e73a27f339b56c,PodSandboxId:4d0b2a1c727ceef717a7d10522e55f9b21763d10b548f6f8ca153b04c08a6ac9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1719331628128137056,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a01324348bb22c8ce03c490b59b42a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.
pod.terminationGracePeriod: 30,},},&Container{Id:993db242e335019216c340e823497dc2a88a83153badf0eecd3d96c454418fa2,PodSandboxId:2a19a6bbe06bb54deaff5da581b9b45fcd9494227983dd022d0426bb0ab3ccd9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719331614792099718,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh9n5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a24539-3168-42cc-93b3-d0b1e283d0bd,},Annotations:map[string]string{io.kubernetes.container.hash: 676d07db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},
},&Container{Id:5aa65b67e926f58e42af575f038a6429658821d33a89c7d8113e504ad3e6d174,PodSandboxId:c65aa084da236ba3d7ea0e7917b41dbcdeb30f405ebd8c15df8171e1500f95f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1719331614912204262,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c227c5cf-2bd6-4ebf-9fdb-09d4229cf421,},Annotations:map[string]string{io.kubernetes.container.hash: 69a09345,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{
Id:967eda17cca156722128d0041068e08832d6bb3264caea0bc5fd19be28bf6525,PodSandboxId:cb4f06a80c952a0b7022ea2ce0a18462d11c45e82ab40aa1a165b789f8a376ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719331614718978303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-84zkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f6426f8-a0c4-470c-b2b1-b62fa304c078,},Annotations:map[string]string{io.kubernetes.container.hash: f8bf5066,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca99b81f8f3af37b3a11b2b7acc63b243a795b431477e675b66e1ee8e98320f2,PodSandboxId:b24964e6136b211047c159c805ba1b9d39cffa512722bad477a943783aa84d2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719331614646463810,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28db5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1426e4a3-2f25-47e9-9b28-b23a81a3a19a,},Annotations:map[string]string{io.kubernetes.container.hash: 370c32f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccd04da046aa1783311d35bbb47f916525d9abb5c660cc42d5a6bdebb5c66006,PodSandboxId:bc7e32fc0f4498733a23c791acb9f215a768d0753f5b4a856a40a6f127b6e5fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719331614552451117,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: c950a69aa02d41e644621ad51e81615e,},Annotations:map[string]string{io.kubernetes.container.hash: ca1dc9e0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:302139a799fb523035e8a52ecadeecbc2fbc59026ad9ff69cbc5264b7192ee4d,PodSandboxId:118263dc5302bc116abde0d88cbfa447be7e9d2d76ee9dfe5e54d3287225cdaa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1719331614417774935,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 20b6a570fe6dbd4fb43c7d5ee0090fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9d8af94261757538ed64ce37812b4b63ab671c65b53859c558794dffd24a708,PodSandboxId:95eb77c224efb384a2d8a7be87f4b501d838237501dd4657d3dfe9e941a351cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1719331614473721171,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551c5e
f04195f1917356d613c1c2825c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b5bb08a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e9ec15e9ff71c88b7a6fa2117facf91756b7926806742ce61de5689d4eb2a9a,PodSandboxId:22f98f821bc50e6c29db3ed17ae565a5aaa5284f725895f2745c392ce3d8c318,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719331614384125800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b7475c05d199411d1b89c7e4e
58c52,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:527ce3a2a7ca580efeb561b1d051220b75eabca666282f31d4b998998c5ae267,PodSandboxId:c1c292c76f6457f213fc624fc352e1d70746b4876bd376bbdfe5c523e7ae157c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1719331610134727020,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ntq77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37736a9f-5b4c-421c-9027-81e961ab8550,},Annotations:map[string]strin
g{io.kubernetes.container.hash: d770c3b8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd7837c56cda33edd21808fe9d0441fdd08abd1bdebe8f801a3611412c9f4915,PodSandboxId:d18f421cdb437abaad95182a5581045ed7639dbd944aa4d3b7cbcf8551a67f1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1719331123602540696,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qjw4r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49031b4f-d04c-44bf-9725-094e7df6945c,},Annotations:map[string]string{
io.kubernetes.container.hash: 37a65fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec00b1016861e1acd703a40af2b274a6d7bd1b0b9c8e37a463cd46994e27ce0b,PodSandboxId:2249d5de30294a4411052d912ac663f8b0d2f1f1e010eace066e8eba72cff9f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719330982140261149,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-84zkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f6426f8-a0c4-470c-b2b1-b62fa304c078,},Annotations:map[string]string{io.kubernetes.container.hash: f8b
f5066,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dff3834f63a382816899f273fe9970c90e171b84aa75c6626dd2435c35f00d8,PodSandboxId:36a6cd372769cb4e0b61267af34ab214f7e98a894596572c1f18f91b85865fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719330982105192113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.nam
e: coredns-7db6d8ff4d-28db5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1426e4a3-2f25-47e9-9b28-b23a81a3a19a,},Annotations:map[string]string{io.kubernetes.container.hash: 370c32f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea2f95fa7a7ef2b8d281a5e1b9c59c317c03084458fe57df036d763e43180c,PodSandboxId:41bb01e505abeae0d97e1019e5c33c9523130dd829e516e2ded6ffc9072c534b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c53
5741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1719330979753078901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh9n5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a24539-3168-42cc-93b3-d0b1e283d0bd,},Annotations:map[string]string{io.kubernetes.container.hash: 676d07db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ed432b8fb613e5494924e83c46de1bfe881b19bdb28b693da5298cf99f2e65,PodSandboxId:3498fabc6b53a97d349e73fb2ef8cb3df14eef29ff198836b4363612da9f0c9e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca
2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1719330960414738558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b7475c05d199411d1b89c7e4e58c52,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e903f61a215f1423f9e270e19b11a7357fee578dc62e4dd059dbe9c47a999c32,PodSandboxId:4695ac9edbc507bbbbe372a26cedd099c7de9206dd507a961697b309c7144f1e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Sta
te:CONTAINER_EXITED,CreatedAt:1719330960349486669,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c950a69aa02d41e644621ad51e81615e,},Annotations:map[string]string{io.kubernetes.container.hash: ca1dc9e0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b0d773ba-fe29-422a-9a65-a5fc1a30bbd7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:09:05 ha-674765 crio[3936]: time="2024-06-25 16:09:05.405393954Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3d439224-71cb-41d7-9b07-05f9a9a0e66f name=/runtime.v1.RuntimeService/Version
	Jun 25 16:09:05 ha-674765 crio[3936]: time="2024-06-25 16:09:05.405491988Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3d439224-71cb-41d7-9b07-05f9a9a0e66f name=/runtime.v1.RuntimeService/Version
	Jun 25 16:09:05 ha-674765 crio[3936]: time="2024-06-25 16:09:05.406770059Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a5061320-06cc-4a1c-baa7-1e0f86d67b6c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:09:05 ha-674765 crio[3936]: time="2024-06-25 16:09:05.407441351Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719331745407409543,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5061320-06cc-4a1c-baa7-1e0f86d67b6c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:09:05 ha-674765 crio[3936]: time="2024-06-25 16:09:05.408014689Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=20b8eade-f1c8-4479-b292-794b99e04914 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:09:05 ha-674765 crio[3936]: time="2024-06-25 16:09:05.408160892Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=20b8eade-f1c8-4479-b292-794b99e04914 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:09:05 ha-674765 crio[3936]: time="2024-06-25 16:09:05.408551717Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0c3426ec1f55cfcf657863a2bb9d1d1ed319358c204b3013dc6ea1040ef44ede,PodSandboxId:c1c292c76f6457f213fc624fc352e1d70746b4876bd376bbdfe5c523e7ae157c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1719331679606027881,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ntq77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37736a9f-5b4c-421c-9027-81e961ab8550,},Annotations:map[string]string{io.kubernetes.container.hash: d770c3b8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62eb4fc54721a6cc41ca7c6a7e298bebe70e0bd709ee162f8002bcd99b09f69,PodSandboxId:118263dc5302bc116abde0d88cbfa447be7e9d2d76ee9dfe5e54d3287225cdaa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719331650600404866,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b6a570fe6dbd4fb43c7d5ee0090fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91160cb26919875a6826440f019fdc7a29c4cb1cca9f728b8634425c5d0d0055,PodSandboxId:a74cac97870233d5be7e99d859054dbe59bb62b62dcebb9b93d53d7d97e6ff21,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1719331647935856429,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qjw4r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49031b4f-d04c-44bf-9725-094e7df6945c,},Annotations:map[string]string{io.kubernetes.container.hash: 37a65fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5616a5da2347e48b25e5207a5d383a1b8395ebbdf7444bc962fbab867ccdb3e,PodSandboxId:95eb77c224efb384a2d8a7be87f4b501d838237501dd4657d3dfe9e941a351cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719331645702587343,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551c5ef04195f1917356d613c1c2825c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b5bb08a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81c1834b73fff1174eded512f55252efedba82c00af1234c13e73a27f339b56c,PodSandboxId:4d0b2a1c727ceef717a7d10522e55f9b21763d10b548f6f8ca153b04c08a6ac9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1719331628128137056,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a01324348bb22c8ce03c490b59b42a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.
pod.terminationGracePeriod: 30,},},&Container{Id:993db242e335019216c340e823497dc2a88a83153badf0eecd3d96c454418fa2,PodSandboxId:2a19a6bbe06bb54deaff5da581b9b45fcd9494227983dd022d0426bb0ab3ccd9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719331614792099718,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh9n5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a24539-3168-42cc-93b3-d0b1e283d0bd,},Annotations:map[string]string{io.kubernetes.container.hash: 676d07db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},
},&Container{Id:5aa65b67e926f58e42af575f038a6429658821d33a89c7d8113e504ad3e6d174,PodSandboxId:c65aa084da236ba3d7ea0e7917b41dbcdeb30f405ebd8c15df8171e1500f95f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1719331614912204262,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c227c5cf-2bd6-4ebf-9fdb-09d4229cf421,},Annotations:map[string]string{io.kubernetes.container.hash: 69a09345,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{
Id:967eda17cca156722128d0041068e08832d6bb3264caea0bc5fd19be28bf6525,PodSandboxId:cb4f06a80c952a0b7022ea2ce0a18462d11c45e82ab40aa1a165b789f8a376ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719331614718978303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-84zkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f6426f8-a0c4-470c-b2b1-b62fa304c078,},Annotations:map[string]string{io.kubernetes.container.hash: f8bf5066,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca99b81f8f3af37b3a11b2b7acc63b243a795b431477e675b66e1ee8e98320f2,PodSandboxId:b24964e6136b211047c159c805ba1b9d39cffa512722bad477a943783aa84d2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719331614646463810,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28db5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1426e4a3-2f25-47e9-9b28-b23a81a3a19a,},Annotations:map[string]string{io.kubernetes.container.hash: 370c32f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccd04da046aa1783311d35bbb47f916525d9abb5c660cc42d5a6bdebb5c66006,PodSandboxId:bc7e32fc0f4498733a23c791acb9f215a768d0753f5b4a856a40a6f127b6e5fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719331614552451117,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: c950a69aa02d41e644621ad51e81615e,},Annotations:map[string]string{io.kubernetes.container.hash: ca1dc9e0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:302139a799fb523035e8a52ecadeecbc2fbc59026ad9ff69cbc5264b7192ee4d,PodSandboxId:118263dc5302bc116abde0d88cbfa447be7e9d2d76ee9dfe5e54d3287225cdaa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1719331614417774935,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 20b6a570fe6dbd4fb43c7d5ee0090fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9d8af94261757538ed64ce37812b4b63ab671c65b53859c558794dffd24a708,PodSandboxId:95eb77c224efb384a2d8a7be87f4b501d838237501dd4657d3dfe9e941a351cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1719331614473721171,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551c5e
f04195f1917356d613c1c2825c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b5bb08a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e9ec15e9ff71c88b7a6fa2117facf91756b7926806742ce61de5689d4eb2a9a,PodSandboxId:22f98f821bc50e6c29db3ed17ae565a5aaa5284f725895f2745c392ce3d8c318,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719331614384125800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b7475c05d199411d1b89c7e4e
58c52,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:527ce3a2a7ca580efeb561b1d051220b75eabca666282f31d4b998998c5ae267,PodSandboxId:c1c292c76f6457f213fc624fc352e1d70746b4876bd376bbdfe5c523e7ae157c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1719331610134727020,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ntq77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37736a9f-5b4c-421c-9027-81e961ab8550,},Annotations:map[string]strin
g{io.kubernetes.container.hash: d770c3b8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd7837c56cda33edd21808fe9d0441fdd08abd1bdebe8f801a3611412c9f4915,PodSandboxId:d18f421cdb437abaad95182a5581045ed7639dbd944aa4d3b7cbcf8551a67f1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1719331123602540696,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qjw4r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49031b4f-d04c-44bf-9725-094e7df6945c,},Annotations:map[string]string{
io.kubernetes.container.hash: 37a65fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec00b1016861e1acd703a40af2b274a6d7bd1b0b9c8e37a463cd46994e27ce0b,PodSandboxId:2249d5de30294a4411052d912ac663f8b0d2f1f1e010eace066e8eba72cff9f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719330982140261149,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-84zkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f6426f8-a0c4-470c-b2b1-b62fa304c078,},Annotations:map[string]string{io.kubernetes.container.hash: f8b
f5066,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dff3834f63a382816899f273fe9970c90e171b84aa75c6626dd2435c35f00d8,PodSandboxId:36a6cd372769cb4e0b61267af34ab214f7e98a894596572c1f18f91b85865fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719330982105192113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.nam
e: coredns-7db6d8ff4d-28db5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1426e4a3-2f25-47e9-9b28-b23a81a3a19a,},Annotations:map[string]string{io.kubernetes.container.hash: 370c32f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea2f95fa7a7ef2b8d281a5e1b9c59c317c03084458fe57df036d763e43180c,PodSandboxId:41bb01e505abeae0d97e1019e5c33c9523130dd829e516e2ded6ffc9072c534b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c53
5741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1719330979753078901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh9n5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a24539-3168-42cc-93b3-d0b1e283d0bd,},Annotations:map[string]string{io.kubernetes.container.hash: 676d07db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ed432b8fb613e5494924e83c46de1bfe881b19bdb28b693da5298cf99f2e65,PodSandboxId:3498fabc6b53a97d349e73fb2ef8cb3df14eef29ff198836b4363612da9f0c9e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca
2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1719330960414738558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b7475c05d199411d1b89c7e4e58c52,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e903f61a215f1423f9e270e19b11a7357fee578dc62e4dd059dbe9c47a999c32,PodSandboxId:4695ac9edbc507bbbbe372a26cedd099c7de9206dd507a961697b309c7144f1e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Sta
te:CONTAINER_EXITED,CreatedAt:1719330960349486669,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c950a69aa02d41e644621ad51e81615e,},Annotations:map[string]string{io.kubernetes.container.hash: ca1dc9e0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=20b8eade-f1c8-4479-b292-794b99e04914 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:09:05 ha-674765 crio[3936]: time="2024-06-25 16:09:05.466597825Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d0af1ab7-3930-4a49-a15a-6c1240627e1c name=/runtime.v1.RuntimeService/Version
	Jun 25 16:09:05 ha-674765 crio[3936]: time="2024-06-25 16:09:05.466690991Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d0af1ab7-3930-4a49-a15a-6c1240627e1c name=/runtime.v1.RuntimeService/Version
	Jun 25 16:09:05 ha-674765 crio[3936]: time="2024-06-25 16:09:05.468234009Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a2080c0c-0445-46d9-8b78-33016beacae4 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:09:05 ha-674765 crio[3936]: time="2024-06-25 16:09:05.468769820Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719331745468747150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a2080c0c-0445-46d9-8b78-33016beacae4 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:09:05 ha-674765 crio[3936]: time="2024-06-25 16:09:05.469505185Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aee0fd18-09ff-49e2-af9b-f9ffbf2b8fd5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:09:05 ha-674765 crio[3936]: time="2024-06-25 16:09:05.469596699Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aee0fd18-09ff-49e2-af9b-f9ffbf2b8fd5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:09:05 ha-674765 crio[3936]: time="2024-06-25 16:09:05.470076938Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0c3426ec1f55cfcf657863a2bb9d1d1ed319358c204b3013dc6ea1040ef44ede,PodSandboxId:c1c292c76f6457f213fc624fc352e1d70746b4876bd376bbdfe5c523e7ae157c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1719331679606027881,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ntq77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37736a9f-5b4c-421c-9027-81e961ab8550,},Annotations:map[string]string{io.kubernetes.container.hash: d770c3b8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62eb4fc54721a6cc41ca7c6a7e298bebe70e0bd709ee162f8002bcd99b09f69,PodSandboxId:118263dc5302bc116abde0d88cbfa447be7e9d2d76ee9dfe5e54d3287225cdaa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719331650600404866,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b6a570fe6dbd4fb43c7d5ee0090fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91160cb26919875a6826440f019fdc7a29c4cb1cca9f728b8634425c5d0d0055,PodSandboxId:a74cac97870233d5be7e99d859054dbe59bb62b62dcebb9b93d53d7d97e6ff21,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1719331647935856429,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qjw4r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49031b4f-d04c-44bf-9725-094e7df6945c,},Annotations:map[string]string{io.kubernetes.container.hash: 37a65fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5616a5da2347e48b25e5207a5d383a1b8395ebbdf7444bc962fbab867ccdb3e,PodSandboxId:95eb77c224efb384a2d8a7be87f4b501d838237501dd4657d3dfe9e941a351cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719331645702587343,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551c5ef04195f1917356d613c1c2825c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b5bb08a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81c1834b73fff1174eded512f55252efedba82c00af1234c13e73a27f339b56c,PodSandboxId:4d0b2a1c727ceef717a7d10522e55f9b21763d10b548f6f8ca153b04c08a6ac9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1719331628128137056,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a01324348bb22c8ce03c490b59b42a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.
pod.terminationGracePeriod: 30,},},&Container{Id:993db242e335019216c340e823497dc2a88a83153badf0eecd3d96c454418fa2,PodSandboxId:2a19a6bbe06bb54deaff5da581b9b45fcd9494227983dd022d0426bb0ab3ccd9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719331614792099718,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh9n5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a24539-3168-42cc-93b3-d0b1e283d0bd,},Annotations:map[string]string{io.kubernetes.container.hash: 676d07db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},
},&Container{Id:5aa65b67e926f58e42af575f038a6429658821d33a89c7d8113e504ad3e6d174,PodSandboxId:c65aa084da236ba3d7ea0e7917b41dbcdeb30f405ebd8c15df8171e1500f95f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1719331614912204262,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c227c5cf-2bd6-4ebf-9fdb-09d4229cf421,},Annotations:map[string]string{io.kubernetes.container.hash: 69a09345,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{
Id:967eda17cca156722128d0041068e08832d6bb3264caea0bc5fd19be28bf6525,PodSandboxId:cb4f06a80c952a0b7022ea2ce0a18462d11c45e82ab40aa1a165b789f8a376ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719331614718978303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-84zkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f6426f8-a0c4-470c-b2b1-b62fa304c078,},Annotations:map[string]string{io.kubernetes.container.hash: f8bf5066,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca99b81f8f3af37b3a11b2b7acc63b243a795b431477e675b66e1ee8e98320f2,PodSandboxId:b24964e6136b211047c159c805ba1b9d39cffa512722bad477a943783aa84d2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719331614646463810,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28db5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1426e4a3-2f25-47e9-9b28-b23a81a3a19a,},Annotations:map[string]string{io.kubernetes.container.hash: 370c32f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccd04da046aa1783311d35bbb47f916525d9abb5c660cc42d5a6bdebb5c66006,PodSandboxId:bc7e32fc0f4498733a23c791acb9f215a768d0753f5b4a856a40a6f127b6e5fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719331614552451117,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: c950a69aa02d41e644621ad51e81615e,},Annotations:map[string]string{io.kubernetes.container.hash: ca1dc9e0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:302139a799fb523035e8a52ecadeecbc2fbc59026ad9ff69cbc5264b7192ee4d,PodSandboxId:118263dc5302bc116abde0d88cbfa447be7e9d2d76ee9dfe5e54d3287225cdaa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1719331614417774935,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 20b6a570fe6dbd4fb43c7d5ee0090fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9d8af94261757538ed64ce37812b4b63ab671c65b53859c558794dffd24a708,PodSandboxId:95eb77c224efb384a2d8a7be87f4b501d838237501dd4657d3dfe9e941a351cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1719331614473721171,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551c5e
f04195f1917356d613c1c2825c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b5bb08a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e9ec15e9ff71c88b7a6fa2117facf91756b7926806742ce61de5689d4eb2a9a,PodSandboxId:22f98f821bc50e6c29db3ed17ae565a5aaa5284f725895f2745c392ce3d8c318,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719331614384125800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b7475c05d199411d1b89c7e4e
58c52,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:527ce3a2a7ca580efeb561b1d051220b75eabca666282f31d4b998998c5ae267,PodSandboxId:c1c292c76f6457f213fc624fc352e1d70746b4876bd376bbdfe5c523e7ae157c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1719331610134727020,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ntq77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37736a9f-5b4c-421c-9027-81e961ab8550,},Annotations:map[string]strin
g{io.kubernetes.container.hash: d770c3b8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd7837c56cda33edd21808fe9d0441fdd08abd1bdebe8f801a3611412c9f4915,PodSandboxId:d18f421cdb437abaad95182a5581045ed7639dbd944aa4d3b7cbcf8551a67f1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1719331123602540696,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qjw4r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49031b4f-d04c-44bf-9725-094e7df6945c,},Annotations:map[string]string{
io.kubernetes.container.hash: 37a65fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec00b1016861e1acd703a40af2b274a6d7bd1b0b9c8e37a463cd46994e27ce0b,PodSandboxId:2249d5de30294a4411052d912ac663f8b0d2f1f1e010eace066e8eba72cff9f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719330982140261149,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-84zkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f6426f8-a0c4-470c-b2b1-b62fa304c078,},Annotations:map[string]string{io.kubernetes.container.hash: f8b
f5066,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dff3834f63a382816899f273fe9970c90e171b84aa75c6626dd2435c35f00d8,PodSandboxId:36a6cd372769cb4e0b61267af34ab214f7e98a894596572c1f18f91b85865fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719330982105192113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.nam
e: coredns-7db6d8ff4d-28db5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1426e4a3-2f25-47e9-9b28-b23a81a3a19a,},Annotations:map[string]string{io.kubernetes.container.hash: 370c32f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea2f95fa7a7ef2b8d281a5e1b9c59c317c03084458fe57df036d763e43180c,PodSandboxId:41bb01e505abeae0d97e1019e5c33c9523130dd829e516e2ded6ffc9072c534b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c53
5741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1719330979753078901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh9n5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a24539-3168-42cc-93b3-d0b1e283d0bd,},Annotations:map[string]string{io.kubernetes.container.hash: 676d07db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ed432b8fb613e5494924e83c46de1bfe881b19bdb28b693da5298cf99f2e65,PodSandboxId:3498fabc6b53a97d349e73fb2ef8cb3df14eef29ff198836b4363612da9f0c9e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca
2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1719330960414738558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b7475c05d199411d1b89c7e4e58c52,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e903f61a215f1423f9e270e19b11a7357fee578dc62e4dd059dbe9c47a999c32,PodSandboxId:4695ac9edbc507bbbbe372a26cedd099c7de9206dd507a961697b309c7144f1e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Sta
te:CONTAINER_EXITED,CreatedAt:1719330960349486669,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c950a69aa02d41e644621ad51e81615e,},Annotations:map[string]string{io.kubernetes.container.hash: ca1dc9e0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aee0fd18-09ff-49e2-af9b-f9ffbf2b8fd5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:09:05 ha-674765 crio[3936]: time="2024-06-25 16:09:05.528187112Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e677b3ce-3f4c-429e-9786-64e2061cf59b name=/runtime.v1.RuntimeService/Version
	Jun 25 16:09:05 ha-674765 crio[3936]: time="2024-06-25 16:09:05.528418522Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e677b3ce-3f4c-429e-9786-64e2061cf59b name=/runtime.v1.RuntimeService/Version
	Jun 25 16:09:05 ha-674765 crio[3936]: time="2024-06-25 16:09:05.529803557Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9d647ea1-9860-4ce6-a2a9-9e73c6a670dc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:09:05 ha-674765 crio[3936]: time="2024-06-25 16:09:05.530511962Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719331745530443489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9d647ea1-9860-4ce6-a2a9-9e73c6a670dc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:09:05 ha-674765 crio[3936]: time="2024-06-25 16:09:05.531351917Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4e2427b-6dc4-42db-b161-405375641e95 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:09:05 ha-674765 crio[3936]: time="2024-06-25 16:09:05.531419008Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4e2427b-6dc4-42db-b161-405375641e95 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:09:05 ha-674765 crio[3936]: time="2024-06-25 16:09:05.531815697Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0c3426ec1f55cfcf657863a2bb9d1d1ed319358c204b3013dc6ea1040ef44ede,PodSandboxId:c1c292c76f6457f213fc624fc352e1d70746b4876bd376bbdfe5c523e7ae157c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1719331679606027881,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ntq77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37736a9f-5b4c-421c-9027-81e961ab8550,},Annotations:map[string]string{io.kubernetes.container.hash: d770c3b8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62eb4fc54721a6cc41ca7c6a7e298bebe70e0bd709ee162f8002bcd99b09f69,PodSandboxId:118263dc5302bc116abde0d88cbfa447be7e9d2d76ee9dfe5e54d3287225cdaa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719331650600404866,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b6a570fe6dbd4fb43c7d5ee0090fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91160cb26919875a6826440f019fdc7a29c4cb1cca9f728b8634425c5d0d0055,PodSandboxId:a74cac97870233d5be7e99d859054dbe59bb62b62dcebb9b93d53d7d97e6ff21,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1719331647935856429,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qjw4r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49031b4f-d04c-44bf-9725-094e7df6945c,},Annotations:map[string]string{io.kubernetes.container.hash: 37a65fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5616a5da2347e48b25e5207a5d383a1b8395ebbdf7444bc962fbab867ccdb3e,PodSandboxId:95eb77c224efb384a2d8a7be87f4b501d838237501dd4657d3dfe9e941a351cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719331645702587343,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551c5ef04195f1917356d613c1c2825c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b5bb08a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81c1834b73fff1174eded512f55252efedba82c00af1234c13e73a27f339b56c,PodSandboxId:4d0b2a1c727ceef717a7d10522e55f9b21763d10b548f6f8ca153b04c08a6ac9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1719331628128137056,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a01324348bb22c8ce03c490b59b42a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.
pod.terminationGracePeriod: 30,},},&Container{Id:993db242e335019216c340e823497dc2a88a83153badf0eecd3d96c454418fa2,PodSandboxId:2a19a6bbe06bb54deaff5da581b9b45fcd9494227983dd022d0426bb0ab3ccd9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719331614792099718,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh9n5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a24539-3168-42cc-93b3-d0b1e283d0bd,},Annotations:map[string]string{io.kubernetes.container.hash: 676d07db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},
},&Container{Id:5aa65b67e926f58e42af575f038a6429658821d33a89c7d8113e504ad3e6d174,PodSandboxId:c65aa084da236ba3d7ea0e7917b41dbcdeb30f405ebd8c15df8171e1500f95f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1719331614912204262,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c227c5cf-2bd6-4ebf-9fdb-09d4229cf421,},Annotations:map[string]string{io.kubernetes.container.hash: 69a09345,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{
Id:967eda17cca156722128d0041068e08832d6bb3264caea0bc5fd19be28bf6525,PodSandboxId:cb4f06a80c952a0b7022ea2ce0a18462d11c45e82ab40aa1a165b789f8a376ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719331614718978303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-84zkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f6426f8-a0c4-470c-b2b1-b62fa304c078,},Annotations:map[string]string{io.kubernetes.container.hash: f8bf5066,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca99b81f8f3af37b3a11b2b7acc63b243a795b431477e675b66e1ee8e98320f2,PodSandboxId:b24964e6136b211047c159c805ba1b9d39cffa512722bad477a943783aa84d2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719331614646463810,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28db5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1426e4a3-2f25-47e9-9b28-b23a81a3a19a,},Annotations:map[string]string{io.kubernetes.container.hash: 370c32f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccd04da046aa1783311d35bbb47f916525d9abb5c660cc42d5a6bdebb5c66006,PodSandboxId:bc7e32fc0f4498733a23c791acb9f215a768d0753f5b4a856a40a6f127b6e5fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719331614552451117,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: c950a69aa02d41e644621ad51e81615e,},Annotations:map[string]string{io.kubernetes.container.hash: ca1dc9e0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:302139a799fb523035e8a52ecadeecbc2fbc59026ad9ff69cbc5264b7192ee4d,PodSandboxId:118263dc5302bc116abde0d88cbfa447be7e9d2d76ee9dfe5e54d3287225cdaa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1719331614417774935,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 20b6a570fe6dbd4fb43c7d5ee0090fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9d8af94261757538ed64ce37812b4b63ab671c65b53859c558794dffd24a708,PodSandboxId:95eb77c224efb384a2d8a7be87f4b501d838237501dd4657d3dfe9e941a351cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1719331614473721171,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551c5e
f04195f1917356d613c1c2825c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b5bb08a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e9ec15e9ff71c88b7a6fa2117facf91756b7926806742ce61de5689d4eb2a9a,PodSandboxId:22f98f821bc50e6c29db3ed17ae565a5aaa5284f725895f2745c392ce3d8c318,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719331614384125800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b7475c05d199411d1b89c7e4e
58c52,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:527ce3a2a7ca580efeb561b1d051220b75eabca666282f31d4b998998c5ae267,PodSandboxId:c1c292c76f6457f213fc624fc352e1d70746b4876bd376bbdfe5c523e7ae157c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1719331610134727020,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ntq77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37736a9f-5b4c-421c-9027-81e961ab8550,},Annotations:map[string]strin
g{io.kubernetes.container.hash: d770c3b8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd7837c56cda33edd21808fe9d0441fdd08abd1bdebe8f801a3611412c9f4915,PodSandboxId:d18f421cdb437abaad95182a5581045ed7639dbd944aa4d3b7cbcf8551a67f1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1719331123602540696,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qjw4r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49031b4f-d04c-44bf-9725-094e7df6945c,},Annotations:map[string]string{
io.kubernetes.container.hash: 37a65fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec00b1016861e1acd703a40af2b274a6d7bd1b0b9c8e37a463cd46994e27ce0b,PodSandboxId:2249d5de30294a4411052d912ac663f8b0d2f1f1e010eace066e8eba72cff9f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719330982140261149,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-84zkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f6426f8-a0c4-470c-b2b1-b62fa304c078,},Annotations:map[string]string{io.kubernetes.container.hash: f8b
f5066,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dff3834f63a382816899f273fe9970c90e171b84aa75c6626dd2435c35f00d8,PodSandboxId:36a6cd372769cb4e0b61267af34ab214f7e98a894596572c1f18f91b85865fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719330982105192113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.nam
e: coredns-7db6d8ff4d-28db5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1426e4a3-2f25-47e9-9b28-b23a81a3a19a,},Annotations:map[string]string{io.kubernetes.container.hash: 370c32f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea2f95fa7a7ef2b8d281a5e1b9c59c317c03084458fe57df036d763e43180c,PodSandboxId:41bb01e505abeae0d97e1019e5c33c9523130dd829e516e2ded6ffc9072c534b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c53
5741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1719330979753078901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh9n5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a24539-3168-42cc-93b3-d0b1e283d0bd,},Annotations:map[string]string{io.kubernetes.container.hash: 676d07db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ed432b8fb613e5494924e83c46de1bfe881b19bdb28b693da5298cf99f2e65,PodSandboxId:3498fabc6b53a97d349e73fb2ef8cb3df14eef29ff198836b4363612da9f0c9e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca
2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1719330960414738558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b7475c05d199411d1b89c7e4e58c52,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e903f61a215f1423f9e270e19b11a7357fee578dc62e4dd059dbe9c47a999c32,PodSandboxId:4695ac9edbc507bbbbe372a26cedd099c7de9206dd507a961697b309c7144f1e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Sta
te:CONTAINER_EXITED,CreatedAt:1719330960349486669,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c950a69aa02d41e644621ad51e81615e,},Annotations:map[string]string{io.kubernetes.container.hash: ca1dc9e0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b4e2427b-6dc4-42db-b161-405375641e95 name=/runtime.v1.RuntimeService/ListContainers
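	
	(Editor's note, not part of the captured log: the repeated "ListContainers" debug entries above are CRI-O logging the standard CRI RuntimeService.ListContainers RPC issued against its runtime socket while these logs were being gathered, most likely by a crictl invocation; an empty filter returns every container the runtime knows about, running or exited, which is why the same exited control-plane containers reappear in each response. The "==> container status <==" table that follows is effectively the same data rendered one row per container. A minimal Go sketch of that call, assuming CRI-O's default socket path /var/run/crio/crio.sock and the published k8s.io/cri-api client package; this is an illustration only, not the tooling the test harness uses:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Assumption: CRI-O is listening on its default unix socket.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// An empty filter mirrors the requests logged above: the runtime
		// returns every container it knows about, running or exited.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			id := c.Id
			if len(id) > 13 {
				id = id[:13] // same truncated ids as the status table below
			}
			fmt.Printf("%-13s  %-17s  %s/%d\n", id, c.State, c.Metadata.Name, c.Metadata.Attempt)
		}
	}
	)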
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0c3426ec1f55c       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      About a minute ago   Running             kindnet-cni               3                   c1c292c76f645       kindnet-ntq77
	b62eb4fc54721       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      About a minute ago   Running             kube-controller-manager   2                   118263dc5302b       kube-controller-manager-ha-674765
	91160cb269198       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   a74cac9787023       busybox-fc5497c4f-qjw4r
	f5616a5da2347       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      About a minute ago   Running             kube-apiserver            3                   95eb77c224efb       kube-apiserver-ha-674765
	81c1834b73fff       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      About a minute ago   Running             kube-vip                  0                   4d0b2a1c727ce       kube-vip-ha-674765
	5aa65b67e926f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       5                   c65aa084da236       storage-provisioner
	993db242e3350       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      2 minutes ago        Running             kube-proxy                1                   2a19a6bbe06bb       kube-proxy-rh9n5
	967eda17cca15       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   cb4f06a80c952       coredns-7db6d8ff4d-84zkt
	ca99b81f8f3af       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   b24964e6136b2       coredns-7db6d8ff4d-28db5
	ccd04da046aa1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   bc7e32fc0f449       etcd-ha-674765
	d9d8af9426175       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      2 minutes ago        Exited              kube-apiserver            2                   95eb77c224efb       kube-apiserver-ha-674765
	302139a799fb5       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      2 minutes ago        Exited              kube-controller-manager   1                   118263dc5302b       kube-controller-manager-ha-674765
	3e9ec15e9ff71       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      2 minutes ago        Running             kube-scheduler            1                   22f98f821bc50       kube-scheduler-ha-674765
	527ce3a2a7ca5       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      2 minutes ago        Exited              kindnet-cni               2                   c1c292c76f645       kindnet-ntq77
	dd7837c56cda3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   d18f421cdb437       busybox-fc5497c4f-qjw4r
	ec00b1016861e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Exited              coredns                   0                   2249d5de30294       coredns-7db6d8ff4d-84zkt
	5dff3834f63a3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Exited              coredns                   0                   36a6cd372769c       coredns-7db6d8ff4d-28db5
	7cea2f95fa7a7       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      12 minutes ago       Exited              kube-proxy                0                   41bb01e505abe       kube-proxy-rh9n5
	a7ed432b8fb61       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      13 minutes ago       Exited              kube-scheduler            0                   3498fabc6b53a       kube-scheduler-ha-674765
	e903f61a215f1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   4695ac9edbc50       etcd-ha-674765
	
	
	==> coredns [5dff3834f63a382816899f273fe9970c90e171b84aa75c6626dd2435c35f00d8] <==
	[INFO] 10.244.0.4:40292 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069393s
	[INFO] 10.244.0.4:47923 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008723s
	[INFO] 10.244.2.2:43607 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000173082s
	[INFO] 10.244.2.2:58140 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152475s
	[INFO] 10.244.2.2:58321 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00137128s
	[INFO] 10.244.2.2:51827 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149446s
	[INFO] 10.244.1.2:53516 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091184s
	[INFO] 10.244.1.2:50837 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111518s
	[INFO] 10.244.0.4:36638 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096918s
	[INFO] 10.244.0.4:34420 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062938s
	[INFO] 10.244.2.2:47727 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109009s
	[INFO] 10.244.2.2:53547 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114146s
	[INFO] 10.244.2.2:52427 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103325s
	[INFO] 10.244.0.4:35396 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015274s
	[INFO] 10.244.0.4:37070 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000162346s
	[INFO] 10.244.0.4:34499 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000181932s
	[INFO] 10.244.2.2:39406 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141568s
	[INFO] 10.244.2.2:45012 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000125003s
	[INFO] 10.244.2.2:37480 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111741s
	[INFO] 10.244.2.2:38163 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000160497s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [967eda17cca156722128d0041068e08832d6bb3264caea0bc5fd19be28bf6525] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1035445682]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jun-2024 16:07:02.458) (total time: 10001ms):
	Trace[1035445682]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:07:12.460)
	Trace[1035445682]: [10.001587684s] [10.001587684s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:34822->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[131637814]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jun-2024 16:07:06.222) (total time: 12359ms):
	Trace[131637814]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:34822->10.96.0.1:443: read: connection reset by peer 12359ms (16:07:18.581)
	Trace[131637814]: [12.359266363s] [12.359266363s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:34822->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:40452->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:40452->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [ca99b81f8f3af37b3a11b2b7acc63b243a795b431477e675b66e1ee8e98320f2] <==
	Trace[2134462864]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (16:07:08.615)
	Trace[2134462864]: [10.00102088s] [10.00102088s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[950250650]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jun-2024 16:06:58.656) (total time: 10000ms):
	Trace[950250650]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (16:07:08.656)
	Trace[950250650]: [10.000769831s] [10.000769831s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:46754->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[469272890]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jun-2024 16:07:06.362) (total time: 12219ms):
	Trace[469272890]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:46754->10.96.0.1:443: read: connection reset by peer 12218ms (16:07:18.581)
	Trace[469272890]: [12.219041212s] [12.219041212s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:46754->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [ec00b1016861e1acd703a40af2b274a6d7bd1b0b9c8e37a463cd46994e27ce0b] <==
	[INFO] 10.244.1.2:57875 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000221098s
	[INFO] 10.244.1.2:50144 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003188543s
	[INFO] 10.244.1.2:52779 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000142142s
	[INFO] 10.244.0.4:54632 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118741s
	[INFO] 10.244.0.4:42979 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001269082s
	[INFO] 10.244.0.4:36713 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000084451s
	[INFO] 10.244.2.2:41583 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001597985s
	[INFO] 10.244.2.2:38518 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007901s
	[INFO] 10.244.2.2:36859 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000163343s
	[INFO] 10.244.2.2:48049 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012051s
	[INFO] 10.244.1.2:41596 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099989s
	[INFO] 10.244.1.2:53657 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152026s
	[INFO] 10.244.0.4:37328 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010546s
	[INFO] 10.244.0.4:37107 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000078111s
	[INFO] 10.244.2.2:58260 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109644s
	[INFO] 10.244.1.2:51838 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161138s
	[INFO] 10.244.1.2:34544 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000245952s
	[INFO] 10.244.1.2:41848 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000133045s
	[INFO] 10.244.1.2:55838 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000180767s
	[INFO] 10.244.0.4:56384 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000068132s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-674765
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-674765
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b
	                    minikube.k8s.io/name=ha-674765
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_25T15_56_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 25 Jun 2024 15:56:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-674765
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 25 Jun 2024 16:08:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 25 Jun 2024 16:07:33 +0000   Tue, 25 Jun 2024 15:56:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 25 Jun 2024 16:07:33 +0000   Tue, 25 Jun 2024 15:56:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 25 Jun 2024 16:07:33 +0000   Tue, 25 Jun 2024 15:56:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 25 Jun 2024 16:07:33 +0000   Tue, 25 Jun 2024 15:56:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.128
	  Hostname:    ha-674765
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b9f74a4b042742c8a0ef29e697c6459c
	  System UUID:                b9f74a4b-0427-42c8-a0ef-29e697c6459c
	  Boot ID:                    52ea2189-696e-4985-bf6b-90448e3e85aa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qjw4r              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-28db5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-7db6d8ff4d-84zkt             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-ha-674765                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-ntq77                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-674765             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-674765    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-rh9n5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-674765             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-674765                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 87s                kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node ha-674765 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node ha-674765 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node ha-674765 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     12m                kubelet          Node ha-674765 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node ha-674765 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node ha-674765 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           12m                node-controller  Node ha-674765 event: Registered Node ha-674765 in Controller
	  Normal   NodeReady                12m                kubelet          Node ha-674765 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node ha-674765 event: Registered Node ha-674765 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-674765 event: Registered Node ha-674765 in Controller
	  Warning  ContainerGCFailed        2m59s              kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           84s                node-controller  Node ha-674765 event: Registered Node ha-674765 in Controller
	  Normal   RegisteredNode           83s                node-controller  Node ha-674765 event: Registered Node ha-674765 in Controller
	  Normal   RegisteredNode           31s                node-controller  Node ha-674765 event: Registered Node ha-674765 in Controller
	
	
	Name:               ha-674765-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-674765-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b
	                    minikube.k8s.io/name=ha-674765
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_25T15_57_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 25 Jun 2024 15:57:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-674765-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 25 Jun 2024 16:09:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 25 Jun 2024 16:08:10 +0000   Tue, 25 Jun 2024 16:07:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 25 Jun 2024 16:08:10 +0000   Tue, 25 Jun 2024 16:07:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 25 Jun 2024 16:08:10 +0000   Tue, 25 Jun 2024 16:07:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 25 Jun 2024 16:08:10 +0000   Tue, 25 Jun 2024 16:07:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.53
	  Hostname:    ha-674765-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 45ee8176fa3149fdb7e4bac2256c26b7
	  System UUID:                45ee8176-fa31-49fd-b7e4-bac2256c26b7
	  Boot ID:                    1e188258-37ae-4518-b693-d29f05e5ab3f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jx6j4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-674765-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-kkgdq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-674765-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-674765-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-lsmft                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-674765-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-674765-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 89s                  kube-proxy       
	  Normal  Starting                 11m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  11m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)    kubelet          Node ha-674765-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)    kubelet          Node ha-674765-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)    kubelet          Node ha-674765-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                  node-controller  Node ha-674765-m02 event: Registered Node ha-674765-m02 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-674765-m02 event: Registered Node ha-674765-m02 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-674765-m02 event: Registered Node ha-674765-m02 in Controller
	  Normal  NodeNotReady             8m41s                node-controller  Node ha-674765-m02 status is now: NodeNotReady
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  117s (x8 over 117s)  kubelet          Node ha-674765-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s (x8 over 117s)  kubelet          Node ha-674765-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s (x7 over 117s)  kubelet          Node ha-674765-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  117s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           85s                  node-controller  Node ha-674765-m02 event: Registered Node ha-674765-m02 in Controller
	  Normal  RegisteredNode           84s                  node-controller  Node ha-674765-m02 event: Registered Node ha-674765-m02 in Controller
	  Normal  RegisteredNode           32s                  node-controller  Node ha-674765-m02 event: Registered Node ha-674765-m02 in Controller
	
	
	Name:               ha-674765-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-674765-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b
	                    minikube.k8s.io/name=ha-674765
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_25T15_58_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 25 Jun 2024 15:58:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-674765-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 25 Jun 2024 16:08:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 25 Jun 2024 16:08:36 +0000   Tue, 25 Jun 2024 15:58:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 25 Jun 2024 16:08:36 +0000   Tue, 25 Jun 2024 15:58:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 25 Jun 2024 16:08:36 +0000   Tue, 25 Jun 2024 15:58:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 25 Jun 2024 16:08:36 +0000   Tue, 25 Jun 2024 15:58:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.77
	  Hostname:    ha-674765-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 82d78f3bf896447aa83d147c6be1d104
	  System UUID:                82d78f3b-f896-447a-a83d-147c6be1d104
	  Boot ID:                    bc4cdf02-db61-43cd-9e12-3e0f1decbefc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vn65x                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-674765-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-px4dn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-674765-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-674765-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-swfsx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-674765-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-674765-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 43s                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-674765-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-674765-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-674765-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-674765-m03 event: Registered Node ha-674765-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-674765-m03 event: Registered Node ha-674765-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-674765-m03 event: Registered Node ha-674765-m03 in Controller
	  Normal   RegisteredNode           85s                node-controller  Node ha-674765-m03 event: Registered Node ha-674765-m03 in Controller
	  Normal   RegisteredNode           84s                node-controller  Node ha-674765-m03 event: Registered Node ha-674765-m03 in Controller
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  61s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node ha-674765-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node ha-674765-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node ha-674765-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 61s                kubelet          Node ha-674765-m03 has been rebooted, boot id: bc4cdf02-db61-43cd-9e12-3e0f1decbefc
	  Normal   RegisteredNode           32s                node-controller  Node ha-674765-m03 event: Registered Node ha-674765-m03 in Controller
	
	
	Name:               ha-674765-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-674765-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b
	                    minikube.k8s.io/name=ha-674765
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_25T15_59_18_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 25 Jun 2024 15:59:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-674765-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 25 Jun 2024 16:08:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 25 Jun 2024 16:08:58 +0000   Tue, 25 Jun 2024 16:08:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 25 Jun 2024 16:08:58 +0000   Tue, 25 Jun 2024 16:08:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 25 Jun 2024 16:08:58 +0000   Tue, 25 Jun 2024 16:08:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 25 Jun 2024 16:08:58 +0000   Tue, 25 Jun 2024 16:08:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.7
	  Hostname:    ha-674765-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 153487087a1a4805965ecc96230ab164
	  System UUID:                15348708-7a1a-4805-965e-cc96230ab164
	  Boot ID:                    d3c3f5a1-b9d4-4e7b-99f0-e8e93bd038b0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6z24k       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m49s
	  kube-system                 kube-proxy-szzwh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4s                     kube-proxy       
	  Normal   Starting                 9m42s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  9m49s (x2 over 9m49s)  kubelet          Node ha-674765-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m49s (x2 over 9m49s)  kubelet          Node ha-674765-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m49s (x2 over 9m49s)  kubelet          Node ha-674765-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m48s                  node-controller  Node ha-674765-m04 event: Registered Node ha-674765-m04 in Controller
	  Normal   RegisteredNode           9m46s                  node-controller  Node ha-674765-m04 event: Registered Node ha-674765-m04 in Controller
	  Normal   RegisteredNode           9m44s                  node-controller  Node ha-674765-m04 event: Registered Node ha-674765-m04 in Controller
	  Normal   NodeReady                9m38s                  kubelet          Node ha-674765-m04 status is now: NodeReady
	  Normal   RegisteredNode           85s                    node-controller  Node ha-674765-m04 event: Registered Node ha-674765-m04 in Controller
	  Normal   RegisteredNode           84s                    node-controller  Node ha-674765-m04 event: Registered Node ha-674765-m04 in Controller
	  Normal   NodeNotReady             45s                    node-controller  Node ha-674765-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           32s                    node-controller  Node ha-674765-m04 event: Registered Node ha-674765-m04 in Controller
	  Normal   Starting                 8s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                     kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)        kubelet          Node ha-674765-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)        kubelet          Node ha-674765-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)        kubelet          Node ha-674765-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                     kubelet          Node ha-674765-m04 has been rebooted, boot id: d3c3f5a1-b9d4-4e7b-99f0-e8e93bd038b0
	  Normal   NodeReady                8s                     kubelet          Node ha-674765-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[ +10.515677] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.054245] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062657] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.163326] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.122319] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.250574] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.069829] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +3.840914] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.060181] kauditd_printk_skb: 158 callbacks suppressed
	[Jun25 15:56] systemd-fstab-generator[1368]: Ignoring "noauto" option for root device
	[  +0.085967] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.321595] kauditd_printk_skb: 21 callbacks suppressed
	[Jun25 15:57] kauditd_printk_skb: 74 callbacks suppressed
	[Jun25 16:03] kauditd_printk_skb: 1 callbacks suppressed
	[Jun25 16:06] systemd-fstab-generator[3857]: Ignoring "noauto" option for root device
	[  +0.148239] systemd-fstab-generator[3869]: Ignoring "noauto" option for root device
	[  +0.171292] systemd-fstab-generator[3883]: Ignoring "noauto" option for root device
	[  +0.147208] systemd-fstab-generator[3895]: Ignoring "noauto" option for root device
	[  +0.265636] systemd-fstab-generator[3923]: Ignoring "noauto" option for root device
	[  +4.883560] systemd-fstab-generator[4024]: Ignoring "noauto" option for root device
	[  +0.083360] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.897496] kauditd_printk_skb: 22 callbacks suppressed
	[Jun25 16:07] kauditd_printk_skb: 87 callbacks suppressed
	[ +10.060166] kauditd_printk_skb: 2 callbacks suppressed
	[ +22.057807] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [ccd04da046aa1783311d35bbb47f916525d9abb5c660cc42d5a6bdebb5c66006] <==
	{"level":"warn","ts":"2024-06-25T16:08:00.726961Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"861dd526078d031b","rtt":"0s","error":"dial tcp 192.168.39.77:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-25T16:08:00.727003Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"861dd526078d031b","rtt":"0s","error":"dial tcp 192.168.39.77:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-25T16:08:01.637218Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.77:2380/version","remote-member-id":"861dd526078d031b","error":"Get \"https://192.168.39.77:2380/version\": dial tcp 192.168.39.77:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-25T16:08:01.637289Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"861dd526078d031b","error":"Get \"https://192.168.39.77:2380/version\": dial tcp 192.168.39.77:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-25T16:08:05.638953Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.77:2380/version","remote-member-id":"861dd526078d031b","error":"Get \"https://192.168.39.77:2380/version\": dial tcp 192.168.39.77:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-25T16:08:05.639026Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"861dd526078d031b","error":"Get \"https://192.168.39.77:2380/version\": dial tcp 192.168.39.77:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-25T16:08:05.727976Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"861dd526078d031b","rtt":"0s","error":"dial tcp 192.168.39.77:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-25T16:08:05.728184Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"861dd526078d031b","rtt":"0s","error":"dial tcp 192.168.39.77:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-25T16:08:09.641148Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.77:2380/version","remote-member-id":"861dd526078d031b","error":"Get \"https://192.168.39.77:2380/version\": dial tcp 192.168.39.77:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-25T16:08:09.641209Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"861dd526078d031b","error":"Get \"https://192.168.39.77:2380/version\": dial tcp 192.168.39.77:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-25T16:08:10.728714Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"861dd526078d031b","rtt":"0s","error":"dial tcp 192.168.39.77:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-25T16:08:10.728822Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"861dd526078d031b","rtt":"0s","error":"dial tcp 192.168.39.77:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-25T16:08:13.642654Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.77:2380/version","remote-member-id":"861dd526078d031b","error":"Get \"https://192.168.39.77:2380/version\": dial tcp 192.168.39.77:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-25T16:08:13.642696Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"861dd526078d031b","error":"Get \"https://192.168.39.77:2380/version\": dial tcp 192.168.39.77:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-25T16:08:15.729441Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"861dd526078d031b","rtt":"0s","error":"dial tcp 192.168.39.77:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-25T16:08:15.7297Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"861dd526078d031b","rtt":"0s","error":"dial tcp 192.168.39.77:2380: connect: connection refused"}
	{"level":"info","ts":"2024-06-25T16:08:17.175533Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"861dd526078d031b"}
	{"level":"info","ts":"2024-06-25T16:08:17.175673Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fa515506e66f6916","remote-peer-id":"861dd526078d031b"}
	{"level":"info","ts":"2024-06-25T16:08:17.178162Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"fa515506e66f6916","remote-peer-id":"861dd526078d031b"}
	{"level":"info","ts":"2024-06-25T16:08:17.203712Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"fa515506e66f6916","to":"861dd526078d031b","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-06-25T16:08:17.203815Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"fa515506e66f6916","remote-peer-id":"861dd526078d031b"}
	{"level":"info","ts":"2024-06-25T16:08:17.204645Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"fa515506e66f6916","to":"861dd526078d031b","stream-type":"stream Message"}
	{"level":"info","ts":"2024-06-25T16:08:17.20474Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"fa515506e66f6916","remote-peer-id":"861dd526078d031b"}
	{"level":"warn","ts":"2024-06-25T16:08:21.511665Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"ce369a7c509ac3e5","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"48.405328ms"}
	{"level":"warn","ts":"2024-06-25T16:08:21.511933Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"861dd526078d031b","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"48.682513ms"}
	
	
	==> etcd [e903f61a215f1423f9e270e19b11a7357fee578dc62e4dd059dbe9c47a999c32] <==
	2024/06/25 16:05:10 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/06/25 16:05:10 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/06/25 16:05:10 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/06/25 16:05:10 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/06/25 16:05:10 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-25T16:05:11.002937Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.128:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-25T16:05:11.003102Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.128:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-25T16:05:11.003185Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"fa515506e66f6916","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-06-25T16:05:11.003346Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ce369a7c509ac3e5"}
	{"level":"info","ts":"2024-06-25T16:05:11.003412Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ce369a7c509ac3e5"}
	{"level":"info","ts":"2024-06-25T16:05:11.003539Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ce369a7c509ac3e5"}
	{"level":"info","ts":"2024-06-25T16:05:11.003747Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5"}
	{"level":"info","ts":"2024-06-25T16:05:11.003805Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5"}
	{"level":"info","ts":"2024-06-25T16:05:11.003858Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5"}
	{"level":"info","ts":"2024-06-25T16:05:11.003931Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ce369a7c509ac3e5"}
	{"level":"info","ts":"2024-06-25T16:05:11.003944Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"861dd526078d031b"}
	{"level":"info","ts":"2024-06-25T16:05:11.003957Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"861dd526078d031b"}
	{"level":"info","ts":"2024-06-25T16:05:11.00399Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"861dd526078d031b"}
	{"level":"info","ts":"2024-06-25T16:05:11.004128Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fa515506e66f6916","remote-peer-id":"861dd526078d031b"}
	{"level":"info","ts":"2024-06-25T16:05:11.004173Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fa515506e66f6916","remote-peer-id":"861dd526078d031b"}
	{"level":"info","ts":"2024-06-25T16:05:11.004217Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fa515506e66f6916","remote-peer-id":"861dd526078d031b"}
	{"level":"info","ts":"2024-06-25T16:05:11.004244Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"861dd526078d031b"}
	{"level":"info","ts":"2024-06-25T16:05:11.006807Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.128:2380"}
	{"level":"info","ts":"2024-06-25T16:05:11.006967Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.128:2380"}
	{"level":"info","ts":"2024-06-25T16:05:11.006994Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-674765","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.128:2380"],"advertise-client-urls":["https://192.168.39.128:2379"]}
	
	
	==> kernel <==
	 16:09:06 up 13 min,  0 users,  load average: 0.16, 0.25, 0.17
	Linux ha-674765 5.10.207 #1 SMP Mon Jun 24 21:03:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [0c3426ec1f55cfcf657863a2bb9d1d1ed319358c204b3013dc6ea1040ef44ede] <==
	I0625 16:08:30.553029       1 main.go:250] Node ha-674765-m04 has CIDR [10.244.3.0/24] 
	I0625 16:08:40.569077       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0625 16:08:40.569251       1 main.go:227] handling current node
	I0625 16:08:40.569297       1 main.go:223] Handling node with IPs: map[192.168.39.53:{}]
	I0625 16:08:40.569316       1 main.go:250] Node ha-674765-m02 has CIDR [10.244.1.0/24] 
	I0625 16:08:40.569476       1 main.go:223] Handling node with IPs: map[192.168.39.77:{}]
	I0625 16:08:40.569498       1 main.go:250] Node ha-674765-m03 has CIDR [10.244.2.0/24] 
	I0625 16:08:40.569582       1 main.go:223] Handling node with IPs: map[192.168.39.7:{}]
	I0625 16:08:40.569605       1 main.go:250] Node ha-674765-m04 has CIDR [10.244.3.0/24] 
	I0625 16:08:50.582160       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0625 16:08:50.582266       1 main.go:227] handling current node
	I0625 16:08:50.582313       1 main.go:223] Handling node with IPs: map[192.168.39.53:{}]
	I0625 16:08:50.582353       1 main.go:250] Node ha-674765-m02 has CIDR [10.244.1.0/24] 
	I0625 16:08:50.582656       1 main.go:223] Handling node with IPs: map[192.168.39.77:{}]
	I0625 16:08:50.582700       1 main.go:250] Node ha-674765-m03 has CIDR [10.244.2.0/24] 
	I0625 16:08:50.582775       1 main.go:223] Handling node with IPs: map[192.168.39.7:{}]
	I0625 16:08:50.582794       1 main.go:250] Node ha-674765-m04 has CIDR [10.244.3.0/24] 
	I0625 16:09:00.600979       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0625 16:09:00.601083       1 main.go:227] handling current node
	I0625 16:09:00.601115       1 main.go:223] Handling node with IPs: map[192.168.39.53:{}]
	I0625 16:09:00.601137       1 main.go:250] Node ha-674765-m02 has CIDR [10.244.1.0/24] 
	I0625 16:09:00.601266       1 main.go:223] Handling node with IPs: map[192.168.39.77:{}]
	I0625 16:09:00.601291       1 main.go:250] Node ha-674765-m03 has CIDR [10.244.2.0/24] 
	I0625 16:09:00.601379       1 main.go:223] Handling node with IPs: map[192.168.39.7:{}]
	I0625 16:09:00.601404       1 main.go:250] Node ha-674765-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [527ce3a2a7ca580efeb561b1d051220b75eabca666282f31d4b998998c5ae267] <==
	I0625 16:06:50.598180       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0625 16:06:50.598248       1 main.go:107] hostIP = 192.168.39.128
	podIP = 192.168.39.128
	I0625 16:06:50.598427       1 main.go:116] setting mtu 1500 for CNI 
	I0625 16:06:50.598478       1 main.go:146] kindnetd IP family: "ipv4"
	I0625 16:06:50.598503       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0625 16:06:50.900242       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0625 16:06:54.005171       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0625 16:06:57.077342       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0625 16:07:09.087572       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0625 16:07:12.437313       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xe3b
	
	
	==> kube-apiserver [d9d8af94261757538ed64ce37812b4b63ab671c65b53859c558794dffd24a708] <==
	I0625 16:06:55.111328       1 options.go:221] external host was not specified, using 192.168.39.128
	I0625 16:06:55.112616       1 server.go:148] Version: v1.30.2
	I0625 16:06:55.112642       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0625 16:06:55.449297       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0625 16:06:55.464953       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0625 16:06:55.484409       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0625 16:06:55.484450       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0625 16:06:55.484686       1 instance.go:299] Using reconciler: lease
	W0625 16:07:15.448619       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0625 16:07:15.448621       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0625 16:07:15.485841       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [f5616a5da2347e48b25e5207a5d383a1b8395ebbdf7444bc962fbab867ccdb3e] <==
	I0625 16:07:27.635139       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0625 16:07:27.635190       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0625 16:07:27.704119       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0625 16:07:27.720516       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0625 16:07:27.720566       1 policy_source.go:224] refreshing policies
	I0625 16:07:27.732945       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0625 16:07:27.733489       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0625 16:07:27.734081       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0625 16:07:27.734154       1 aggregator.go:165] initial CRD sync complete...
	I0625 16:07:27.734190       1 autoregister_controller.go:141] Starting autoregister controller
	I0625 16:07:27.734196       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0625 16:07:27.734201       1 cache.go:39] Caches are synced for autoregister controller
	I0625 16:07:27.737179       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0625 16:07:27.737211       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0625 16:07:27.737272       1 shared_informer.go:320] Caches are synced for configmaps
	I0625 16:07:27.737313       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0625 16:07:27.744159       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0625 16:07:27.762741       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.77]
	I0625 16:07:27.764433       1 controller.go:615] quota admission added evaluator for: endpoints
	I0625 16:07:27.784024       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0625 16:07:27.795817       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0625 16:07:27.807760       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0625 16:07:28.643442       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0625 16:07:29.029672       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.128 192.168.39.77]
	W0625 16:07:49.026748       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.128 192.168.39.53]
	
	
	==> kube-controller-manager [302139a799fb523035e8a52ecadeecbc2fbc59026ad9ff69cbc5264b7192ee4d] <==
	I0625 16:06:55.835731       1 serving.go:380] Generated self-signed cert in-memory
	I0625 16:06:56.077772       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0625 16:06:56.077959       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0625 16:06:56.079679       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0625 16:06:56.080227       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0625 16:06:56.080530       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0625 16:06:56.081092       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0625 16:07:16.492188       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.128:8443/healthz\": dial tcp 192.168.39.128:8443: connect: connection refused"
	
	
	==> kube-controller-manager [b62eb4fc54721a6cc41ca7c6a7e298bebe70e0bd709ee162f8002bcd99b09f69] <==
	I0625 16:07:42.250708       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0625 16:07:42.253057       1 shared_informer.go:320] Caches are synced for PVC protection
	I0625 16:07:42.255683       1 shared_informer.go:320] Caches are synced for HPA
	I0625 16:07:42.256952       1 shared_informer.go:320] Caches are synced for TTL
	I0625 16:07:42.261220       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0625 16:07:42.262843       1 shared_informer.go:320] Caches are synced for attach detach
	I0625 16:07:42.304743       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0625 16:07:42.341958       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0625 16:07:42.348345       1 shared_informer.go:320] Caches are synced for cronjob
	I0625 16:07:42.367005       1 shared_informer.go:320] Caches are synced for disruption
	I0625 16:07:42.404665       1 shared_informer.go:320] Caches are synced for resource quota
	I0625 16:07:42.409076       1 shared_informer.go:320] Caches are synced for job
	I0625 16:07:42.425548       1 shared_informer.go:320] Caches are synced for resource quota
	I0625 16:07:42.869577       1 shared_informer.go:320] Caches are synced for garbage collector
	I0625 16:07:42.913347       1 shared_informer.go:320] Caches are synced for garbage collector
	I0625 16:07:42.913423       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0625 16:07:51.862809       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-p7svw EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-p7svw\": the object has been modified; please apply your changes to the latest version and try again"
	I0625 16:07:51.863121       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"bfa70875-b690-4c96-9121-8273f9c838bf", APIVersion:"v1", ResourceVersion:"297", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-p7svw EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-p7svw": the object has been modified; please apply your changes to the latest version and try again
	I0625 16:07:51.883353       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="62.721489ms"
	I0625 16:07:51.883461       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.58µs"
	I0625 16:08:06.538938       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.373646ms"
	I0625 16:08:06.539174       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.689µs"
	I0625 16:08:24.683367       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.210599ms"
	I0625 16:08:24.684076       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.559µs"
	I0625 16:08:58.381343       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-674765-m04"
	
	
	==> kube-proxy [7cea2f95fa7a7ef2b8d281a5e1b9c59c317c03084458fe57df036d763e43180c] <==
	E0625 16:03:49.109623       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-674765&resourceVersion=1748": dial tcp 192.168.39.254:8443: connect: no route to host
	W0625 16:03:49.109413       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	E0625 16:03:49.109777       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	W0625 16:03:49.109479       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	E0625 16:03:49.109832       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	W0625 16:03:57.365328       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	E0625 16:03:57.365406       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	W0625 16:03:57.365484       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-674765&resourceVersion=1748": dial tcp 192.168.39.254:8443: connect: no route to host
	E0625 16:03:57.365588       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-674765&resourceVersion=1748": dial tcp 192.168.39.254:8443: connect: no route to host
	W0625 16:03:57.365482       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	E0625 16:03:57.365737       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	W0625 16:04:06.774943       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-674765&resourceVersion=1748": dial tcp 192.168.39.254:8443: connect: no route to host
	E0625 16:04:06.775074       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-674765&resourceVersion=1748": dial tcp 192.168.39.254:8443: connect: no route to host
	W0625 16:04:09.846658       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	E0625 16:04:09.847254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	W0625 16:04:09.847085       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	E0625 16:04:09.847431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	W0625 16:04:22.133410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-674765&resourceVersion=1748": dial tcp 192.168.39.254:8443: connect: no route to host
	E0625 16:04:22.133536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-674765&resourceVersion=1748": dial tcp 192.168.39.254:8443: connect: no route to host
	W0625 16:04:28.278763       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	E0625 16:04:28.279464       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	W0625 16:04:28.278853       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	E0625 16:04:28.279689       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	W0625 16:05:05.141391       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	E0625 16:05:05.141505       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [993db242e335019216c340e823497dc2a88a83153badf0eecd3d96c454418fa2] <==
	I0625 16:06:56.176637       1 server_linux.go:69] "Using iptables proxy"
	E0625 16:06:58.807608       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-674765\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0625 16:07:01.877269       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-674765\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0625 16:07:04.950072       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-674765\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0625 16:07:11.094801       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-674765\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0625 16:07:20.309851       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-674765\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0625 16:07:38.102216       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.128"]
	I0625 16:07:38.148167       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0625 16:07:38.148247       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0625 16:07:38.148264       1 server_linux.go:165] "Using iptables Proxier"
	I0625 16:07:38.151177       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0625 16:07:38.151423       1 server.go:872] "Version info" version="v1.30.2"
	I0625 16:07:38.151449       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0625 16:07:38.152684       1 config.go:192] "Starting service config controller"
	I0625 16:07:38.152744       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0625 16:07:38.152774       1 config.go:101] "Starting endpoint slice config controller"
	I0625 16:07:38.152796       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0625 16:07:38.153419       1 config.go:319] "Starting node config controller"
	I0625 16:07:38.153448       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0625 16:07:38.253939       1 shared_informer.go:320] Caches are synced for node config
	I0625 16:07:38.253986       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0625 16:07:38.254044       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [3e9ec15e9ff71c88b7a6fa2117facf91756b7926806742ce61de5689d4eb2a9a] <==
	W0625 16:07:23.785507       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.128:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0625 16:07:23.785582       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.128:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0625 16:07:23.898481       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.128:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0625 16:07:23.898583       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.128:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0625 16:07:24.144415       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.128:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0625 16:07:24.144481       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.128:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0625 16:07:24.277096       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.128:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0625 16:07:24.277175       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.128:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0625 16:07:24.303232       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.128:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0625 16:07:24.303313       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.128:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0625 16:07:24.759105       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.128:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0625 16:07:24.759169       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.128:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0625 16:07:24.789029       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.128:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0625 16:07:24.789084       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.128:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0625 16:07:24.919397       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.128:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0625 16:07:24.919506       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.128:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0625 16:07:25.133752       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.128:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0625 16:07:25.133849       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.128:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0625 16:07:25.148528       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.128:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0625 16:07:25.148616       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.128:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0625 16:07:25.374498       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.128:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0625 16:07:25.374593       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.128:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0625 16:07:25.411455       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.128:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0625 16:07:25.411578       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.128:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	I0625 16:07:27.697340       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [a7ed432b8fb613e5494924e83c46de1bfe881b19bdb28b693da5298cf99f2e65] <==
	W0625 16:05:06.844080       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0625 16:05:06.844118       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0625 16:05:07.165973       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0625 16:05:07.166057       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0625 16:05:07.351656       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0625 16:05:07.351701       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0625 16:05:07.580369       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0625 16:05:07.580462       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0625 16:05:07.902275       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0625 16:05:07.902363       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0625 16:05:07.905108       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0625 16:05:07.905175       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0625 16:05:07.991929       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0625 16:05:07.991978       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0625 16:05:08.060392       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0625 16:05:08.060565       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0625 16:05:08.060509       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0625 16:05:08.060650       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0625 16:05:08.484304       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0625 16:05:08.484447       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0625 16:05:08.633961       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0625 16:05:08.633993       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0625 16:05:09.435926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0625 16:05:09.436019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0625 16:05:10.901721       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jun 25 16:07:55 ha-674765 kubelet[1375]: I0625 16:07:55.586793    1375 scope.go:117] "RemoveContainer" containerID="5aa65b67e926f58e42af575f038a6429658821d33a89c7d8113e504ad3e6d174"
	Jun 25 16:07:55 ha-674765 kubelet[1375]: E0625 16:07:55.587729    1375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c227c5cf-2bd6-4ebf-9fdb-09d4229cf421)\"" pod="kube-system/storage-provisioner" podUID="c227c5cf-2bd6-4ebf-9fdb-09d4229cf421"
	Jun 25 16:07:59 ha-674765 kubelet[1375]: I0625 16:07:59.586667    1375 scope.go:117] "RemoveContainer" containerID="527ce3a2a7ca580efeb561b1d051220b75eabca666282f31d4b998998c5ae267"
	Jun 25 16:08:06 ha-674765 kubelet[1375]: E0625 16:08:06.614580    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 25 16:08:06 ha-674765 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 25 16:08:06 ha-674765 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 25 16:08:06 ha-674765 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 25 16:08:06 ha-674765 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 25 16:08:09 ha-674765 kubelet[1375]: I0625 16:08:09.587263    1375 scope.go:117] "RemoveContainer" containerID="5aa65b67e926f58e42af575f038a6429658821d33a89c7d8113e504ad3e6d174"
	Jun 25 16:08:09 ha-674765 kubelet[1375]: E0625 16:08:09.587545    1375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c227c5cf-2bd6-4ebf-9fdb-09d4229cf421)\"" pod="kube-system/storage-provisioner" podUID="c227c5cf-2bd6-4ebf-9fdb-09d4229cf421"
	Jun 25 16:08:22 ha-674765 kubelet[1375]: I0625 16:08:22.586799    1375 scope.go:117] "RemoveContainer" containerID="5aa65b67e926f58e42af575f038a6429658821d33a89c7d8113e504ad3e6d174"
	Jun 25 16:08:22 ha-674765 kubelet[1375]: E0625 16:08:22.587404    1375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c227c5cf-2bd6-4ebf-9fdb-09d4229cf421)\"" pod="kube-system/storage-provisioner" podUID="c227c5cf-2bd6-4ebf-9fdb-09d4229cf421"
	Jun 25 16:08:26 ha-674765 kubelet[1375]: I0625 16:08:26.586834    1375 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-674765" podUID="1d132475-65bb-43d1-9353-12b7be1f311f"
	Jun 25 16:08:26 ha-674765 kubelet[1375]: I0625 16:08:26.612418    1375 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-674765"
	Jun 25 16:08:36 ha-674765 kubelet[1375]: I0625 16:08:36.587546    1375 scope.go:117] "RemoveContainer" containerID="5aa65b67e926f58e42af575f038a6429658821d33a89c7d8113e504ad3e6d174"
	Jun 25 16:08:36 ha-674765 kubelet[1375]: E0625 16:08:36.587821    1375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c227c5cf-2bd6-4ebf-9fdb-09d4229cf421)\"" pod="kube-system/storage-provisioner" podUID="c227c5cf-2bd6-4ebf-9fdb-09d4229cf421"
	Jun 25 16:08:51 ha-674765 kubelet[1375]: I0625 16:08:51.586993    1375 scope.go:117] "RemoveContainer" containerID="5aa65b67e926f58e42af575f038a6429658821d33a89c7d8113e504ad3e6d174"
	Jun 25 16:08:51 ha-674765 kubelet[1375]: E0625 16:08:51.587323    1375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c227c5cf-2bd6-4ebf-9fdb-09d4229cf421)\"" pod="kube-system/storage-provisioner" podUID="c227c5cf-2bd6-4ebf-9fdb-09d4229cf421"
	Jun 25 16:09:04 ha-674765 kubelet[1375]: I0625 16:09:04.589298    1375 scope.go:117] "RemoveContainer" containerID="5aa65b67e926f58e42af575f038a6429658821d33a89c7d8113e504ad3e6d174"
	Jun 25 16:09:04 ha-674765 kubelet[1375]: E0625 16:09:04.589578    1375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c227c5cf-2bd6-4ebf-9fdb-09d4229cf421)\"" pod="kube-system/storage-provisioner" podUID="c227c5cf-2bd6-4ebf-9fdb-09d4229cf421"
	Jun 25 16:09:06 ha-674765 kubelet[1375]: E0625 16:09:06.613570    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 25 16:09:06 ha-674765 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 25 16:09:06 ha-674765 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 25 16:09:06 ha-674765 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 25 16:09:06 ha-674765 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0625 16:09:05.007857   43672 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19128-13846/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
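Note on the stderr above: the "bufio.Scanner: token too long" message is Go's bufio.Scanner refusing a token larger than its default cap (bufio.MaxScanTokenSize, 64 KiB), so a single oversized line in lastStart.txt is enough to abort reading the file. The sketch below is only an illustration of that limit and the Scanner.Buffer workaround; it is not minikube's actual logs.go code, and the file name is illustrative rather than the real CI path.

    package main

    import (
        "bufio"
        "fmt"
        "os"
    )

    func main() {
        f, err := os.Open("lastStart.txt") // illustrative path, not the real CI path
        if err != nil {
            panic(err)
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        // Default per-token cap is bufio.MaxScanTokenSize (64 KiB); allow lines up to 10 MiB.
        sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
        for sc.Scan() {
            fmt.Println(sc.Text())
        }
        if err := sc.Err(); err != nil {
            // Without the larger buffer this would be bufio.ErrTooLong ("token too long").
            panic(err)
        }
    }

With the default buffer the scan stops at the first over-long line, which is why only the last-start log fails to render here while the rest of the post-mortem output is intact.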
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-674765 -n ha-674765
helpers_test.go:261: (dbg) Run:  kubectl --context ha-674765 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (359.41s)
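For context on the failure above: the etcd log shows the local member repeatedly failing to reach peer 861dd526078d031b at 192.168.39.77:2380 ("connection refused") until 16:08:17, followed by "leader failed to send out heartbeat on time" warnings, which matches the control-plane churn seen during the restart. A hedged way to confirm member health after such a restart is to issue a read against each advertised client URL with the etcd v3 client, as `etcdctl endpoint health` does. The sketch below assumes minikube's default certificate layout under /var/lib/minikube/certs/etcd (not verified here) and uses the 192.168.39.128:2379 endpoint from the log; it is a diagnostic illustration, not part of the test suite.

    package main

    import (
        "context"
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "os"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
        // Assumed cert paths (minikube static-pod defaults); adjust for your cluster.
        ca, err := os.ReadFile("/var/lib/minikube/certs/etcd/ca.crt")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(ca)
        cert, err := tls.LoadX509KeyPair(
            "/var/lib/minikube/certs/etcd/server.crt",
            "/var/lib/minikube/certs/etcd/server.key",
        )
        if err != nil {
            panic(err)
        }

        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"https://192.168.39.128:2379"}, // endpoint from the log above
            DialTimeout: 5 * time.Second,
            TLS:         &tls.Config{RootCAs: pool, Certificates: []tls.Certificate{cert}},
        })
        if err != nil {
            panic(err)
        }
        defer cli.Close()

        // A Get that succeeds (or is merely denied by auth) means the member is serving;
        // a timeout or "connection refused" here mirrors the prober warnings in the log.
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        if _, err := cli.Get(ctx, "health"); err != nil {
            fmt.Println("unhealthy:", err)
            return
        }
        fmt.Println("healthy")
    }

Run from inside the VM (for example via `minikube ssh -p ha-674765`) against each member's client URL, this would show which control-plane member was still refusing connections while the restart test waited.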

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 stop -v=7 --alsologtostderr
E0625 16:09:29.128078   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-674765 stop -v=7 --alsologtostderr: exit status 82 (2m0.45169573s)

-- stdout --
	* Stopping node "ha-674765-m04"  ...
	
	

-- /stdout --
** stderr ** 
	I0625 16:09:24.856650   44086 out.go:291] Setting OutFile to fd 1 ...
	I0625 16:09:24.856875   44086 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:09:24.856884   44086 out.go:304] Setting ErrFile to fd 2...
	I0625 16:09:24.856888   44086 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:09:24.857056   44086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 16:09:24.857252   44086 out.go:298] Setting JSON to false
	I0625 16:09:24.857317   44086 mustload.go:65] Loading cluster: ha-674765
	I0625 16:09:24.857647   44086 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:09:24.857730   44086 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/config.json ...
	I0625 16:09:24.857896   44086 mustload.go:65] Loading cluster: ha-674765
	I0625 16:09:24.858064   44086 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:09:24.858088   44086 stop.go:39] StopHost: ha-674765-m04
	I0625 16:09:24.858734   44086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:09:24.858789   44086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:09:24.874288   44086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40175
	I0625 16:09:24.874676   44086 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:09:24.875236   44086 main.go:141] libmachine: Using API Version  1
	I0625 16:09:24.875255   44086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:09:24.875558   44086 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:09:24.877809   44086 out.go:177] * Stopping node "ha-674765-m04"  ...
	I0625 16:09:24.879190   44086 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0625 16:09:24.879209   44086 main.go:141] libmachine: (ha-674765-m04) Calling .DriverName
	I0625 16:09:24.879403   44086 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0625 16:09:24.879418   44086 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHHostname
	I0625 16:09:24.882458   44086 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:09:24.882916   44086 main.go:141] libmachine: (ha-674765-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:21:a2", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 17:08:53 +0000 UTC Type:0 Mac:52:54:00:7a:21:a2 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-674765-m04 Clientid:01:52:54:00:7a:21:a2}
	I0625 16:09:24.882941   44086 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined IP address 192.168.39.7 and MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:09:24.883069   44086 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHPort
	I0625 16:09:24.883238   44086 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHKeyPath
	I0625 16:09:24.883410   44086 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHUsername
	I0625 16:09:24.883515   44086 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m04/id_rsa Username:docker}
	I0625 16:09:24.964266   44086 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0625 16:09:25.016640   44086 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0625 16:09:25.068254   44086 main.go:141] libmachine: Stopping "ha-674765-m04"...
	I0625 16:09:25.068282   44086 main.go:141] libmachine: (ha-674765-m04) Calling .GetState
	I0625 16:09:25.069856   44086 main.go:141] libmachine: (ha-674765-m04) Calling .Stop
	I0625 16:09:25.073403   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 0/120
	I0625 16:09:26.074779   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 1/120
	I0625 16:09:27.076174   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 2/120
	I0625 16:09:28.077461   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 3/120
	I0625 16:09:29.078721   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 4/120
	I0625 16:09:30.080965   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 5/120
	I0625 16:09:31.082384   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 6/120
	I0625 16:09:32.083662   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 7/120
	I0625 16:09:33.085668   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 8/120
	I0625 16:09:34.086990   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 9/120
	I0625 16:09:35.088080   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 10/120
	I0625 16:09:36.089535   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 11/120
	I0625 16:09:37.091385   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 12/120
	I0625 16:09:38.093265   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 13/120
	I0625 16:09:39.094483   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 14/120
	I0625 16:09:40.096200   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 15/120
	I0625 16:09:41.097577   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 16/120
	I0625 16:09:42.098884   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 17/120
	I0625 16:09:43.100540   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 18/120
	I0625 16:09:44.102642   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 19/120
	I0625 16:09:45.104141   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 20/120
	I0625 16:09:46.105808   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 21/120
	I0625 16:09:47.107098   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 22/120
	I0625 16:09:48.108532   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 23/120
	I0625 16:09:49.109748   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 24/120
	I0625 16:09:50.111597   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 25/120
	I0625 16:09:51.112832   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 26/120
	I0625 16:09:52.114277   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 27/120
	I0625 16:09:53.115932   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 28/120
	I0625 16:09:54.117134   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 29/120
	I0625 16:09:55.119279   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 30/120
	I0625 16:09:56.120696   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 31/120
	I0625 16:09:57.121966   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 32/120
	I0625 16:09:58.123421   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 33/120
	I0625 16:09:59.124730   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 34/120
	I0625 16:10:00.126687   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 35/120
	I0625 16:10:01.128790   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 36/120
	I0625 16:10:02.130530   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 37/120
	I0625 16:10:03.131780   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 38/120
	I0625 16:10:04.133208   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 39/120
	I0625 16:10:05.135235   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 40/120
	I0625 16:10:06.136860   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 41/120
	I0625 16:10:07.138099   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 42/120
	I0625 16:10:08.139595   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 43/120
	I0625 16:10:09.140866   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 44/120
	I0625 16:10:10.142802   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 45/120
	I0625 16:10:11.144100   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 46/120
	I0625 16:10:12.145475   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 47/120
	I0625 16:10:13.146953   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 48/120
	I0625 16:10:14.149024   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 49/120
	I0625 16:10:15.151198   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 50/120
	I0625 16:10:16.153690   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 51/120
	I0625 16:10:17.155255   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 52/120
	I0625 16:10:18.156566   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 53/120
	I0625 16:10:19.157898   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 54/120
	I0625 16:10:20.159875   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 55/120
	I0625 16:10:21.161510   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 56/120
	I0625 16:10:22.162864   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 57/120
	I0625 16:10:23.164876   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 58/120
	I0625 16:10:24.166063   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 59/120
	I0625 16:10:25.167976   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 60/120
	I0625 16:10:26.169215   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 61/120
	I0625 16:10:27.170562   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 62/120
	I0625 16:10:28.172043   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 63/120
	I0625 16:10:29.173411   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 64/120
	I0625 16:10:30.174858   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 65/120
	I0625 16:10:31.176217   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 66/120
	I0625 16:10:32.177975   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 67/120
	I0625 16:10:33.179630   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 68/120
	I0625 16:10:34.180890   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 69/120
	I0625 16:10:35.182563   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 70/120
	I0625 16:10:36.183987   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 71/120
	I0625 16:10:37.185474   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 72/120
	I0625 16:10:38.186782   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 73/120
	I0625 16:10:39.188949   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 74/120
	I0625 16:10:40.190802   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 75/120
	I0625 16:10:41.192491   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 76/120
	I0625 16:10:42.194140   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 77/120
	I0625 16:10:43.195398   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 78/120
	I0625 16:10:44.196860   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 79/120
	I0625 16:10:45.198896   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 80/120
	I0625 16:10:46.200843   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 81/120
	I0625 16:10:47.201990   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 82/120
	I0625 16:10:48.203229   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 83/120
	I0625 16:10:49.204936   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 84/120
	I0625 16:10:50.206747   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 85/120
	I0625 16:10:51.208070   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 86/120
	I0625 16:10:52.209525   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 87/120
	I0625 16:10:53.211232   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 88/120
	I0625 16:10:54.213033   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 89/120
	I0625 16:10:55.215224   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 90/120
	I0625 16:10:56.216896   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 91/120
	I0625 16:10:57.219069   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 92/120
	I0625 16:10:58.220486   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 93/120
	I0625 16:10:59.221963   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 94/120
	I0625 16:11:00.224189   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 95/120
	I0625 16:11:01.225522   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 96/120
	I0625 16:11:02.227114   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 97/120
	I0625 16:11:03.228497   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 98/120
	I0625 16:11:04.229736   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 99/120
	I0625 16:11:05.231623   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 100/120
	I0625 16:11:06.232937   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 101/120
	I0625 16:11:07.234150   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 102/120
	I0625 16:11:08.235460   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 103/120
	I0625 16:11:09.236815   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 104/120
	I0625 16:11:10.238622   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 105/120
	I0625 16:11:11.239966   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 106/120
	I0625 16:11:12.241065   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 107/120
	I0625 16:11:13.243412   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 108/120
	I0625 16:11:14.245400   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 109/120
	I0625 16:11:15.247434   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 110/120
	I0625 16:11:16.248633   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 111/120
	I0625 16:11:17.249908   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 112/120
	I0625 16:11:18.251234   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 113/120
	I0625 16:11:19.252470   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 114/120
	I0625 16:11:20.254193   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 115/120
	I0625 16:11:21.255396   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 116/120
	I0625 16:11:22.256804   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 117/120
	I0625 16:11:23.258104   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 118/120
	I0625 16:11:24.260048   44086 main.go:141] libmachine: (ha-674765-m04) Waiting for machine to stop 119/120
	I0625 16:11:25.260706   44086 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0625 16:11:25.260755   44086 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0625 16:11:25.262595   44086 out.go:177] 
	W0625 16:11:25.263883   44086 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0625 16:11:25.263898   44086 out.go:239] * 
	* 
	W0625 16:11:25.266160   44086 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0625 16:11:25.267460   44086 out.go:177] 

** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-674765 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-674765 status -v=7 --alsologtostderr: exit status 3 (19.009442339s)

-- stdout --
	ha-674765
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-674765-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-674765-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0625 16:11:25.312331   44522 out.go:291] Setting OutFile to fd 1 ...
	I0625 16:11:25.312554   44522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:11:25.312562   44522 out.go:304] Setting ErrFile to fd 2...
	I0625 16:11:25.312566   44522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:11:25.312722   44522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 16:11:25.312919   44522 out.go:298] Setting JSON to false
	I0625 16:11:25.312943   44522 mustload.go:65] Loading cluster: ha-674765
	I0625 16:11:25.313049   44522 notify.go:220] Checking for updates...
	I0625 16:11:25.313335   44522 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:11:25.313362   44522 status.go:255] checking status of ha-674765 ...
	I0625 16:11:25.313798   44522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:11:25.313862   44522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:11:25.331445   44522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44103
	I0625 16:11:25.331912   44522 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:11:25.332557   44522 main.go:141] libmachine: Using API Version  1
	I0625 16:11:25.332595   44522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:11:25.332939   44522 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:11:25.333130   44522 main.go:141] libmachine: (ha-674765) Calling .GetState
	I0625 16:11:25.334820   44522 status.go:330] ha-674765 host status = "Running" (err=<nil>)
	I0625 16:11:25.334846   44522 host.go:66] Checking if "ha-674765" exists ...
	I0625 16:11:25.335125   44522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:11:25.335165   44522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:11:25.349334   44522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46481
	I0625 16:11:25.349683   44522 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:11:25.350097   44522 main.go:141] libmachine: Using API Version  1
	I0625 16:11:25.350116   44522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:11:25.350403   44522 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:11:25.350582   44522 main.go:141] libmachine: (ha-674765) Calling .GetIP
	I0625 16:11:25.353106   44522 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:11:25.353513   44522 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:11:25.353542   44522 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:11:25.353647   44522 host.go:66] Checking if "ha-674765" exists ...
	I0625 16:11:25.353926   44522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:11:25.353969   44522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:11:25.368491   44522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39857
	I0625 16:11:25.368843   44522 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:11:25.369278   44522 main.go:141] libmachine: Using API Version  1
	I0625 16:11:25.369304   44522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:11:25.369586   44522 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:11:25.369804   44522 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 16:11:25.369986   44522 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:11:25.370013   44522 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:11:25.372547   44522 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:11:25.372934   44522 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:11:25.372962   44522 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:11:25.373107   44522 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:11:25.373266   44522 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:11:25.373418   44522 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:11:25.373548   44522 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 16:11:25.460159   44522 ssh_runner.go:195] Run: systemctl --version
	I0625 16:11:25.466192   44522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:11:25.481929   44522 kubeconfig.go:125] found "ha-674765" server: "https://192.168.39.254:8443"
	I0625 16:11:25.481958   44522 api_server.go:166] Checking apiserver status ...
	I0625 16:11:25.481985   44522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 16:11:25.497996   44522 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5154/cgroup
	W0625 16:11:25.507841   44522 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5154/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0625 16:11:25.507882   44522 ssh_runner.go:195] Run: ls
	I0625 16:11:25.512714   44522 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0625 16:11:25.516774   44522 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0625 16:11:25.516794   44522 status.go:422] ha-674765 apiserver status = Running (err=<nil>)
	I0625 16:11:25.516803   44522 status.go:257] ha-674765 status: &{Name:ha-674765 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0625 16:11:25.516816   44522 status.go:255] checking status of ha-674765-m02 ...
	I0625 16:11:25.517086   44522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:11:25.517122   44522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:11:25.531506   44522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46611
	I0625 16:11:25.531968   44522 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:11:25.532506   44522 main.go:141] libmachine: Using API Version  1
	I0625 16:11:25.532527   44522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:11:25.532808   44522 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:11:25.533022   44522 main.go:141] libmachine: (ha-674765-m02) Calling .GetState
	I0625 16:11:25.534580   44522 status.go:330] ha-674765-m02 host status = "Running" (err=<nil>)
	I0625 16:11:25.534598   44522 host.go:66] Checking if "ha-674765-m02" exists ...
	I0625 16:11:25.534957   44522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:11:25.534995   44522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:11:25.548668   44522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40265
	I0625 16:11:25.549007   44522 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:11:25.549427   44522 main.go:141] libmachine: Using API Version  1
	I0625 16:11:25.549444   44522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:11:25.549756   44522 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:11:25.549927   44522 main.go:141] libmachine: (ha-674765-m02) Calling .GetIP
	I0625 16:11:25.552436   44522 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:11:25.552806   44522 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 17:06:59 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 16:11:25.552833   44522 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:11:25.552950   44522 host.go:66] Checking if "ha-674765-m02" exists ...
	I0625 16:11:25.553196   44522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:11:25.553224   44522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:11:25.567675   44522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39377
	I0625 16:11:25.568071   44522 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:11:25.568455   44522 main.go:141] libmachine: Using API Version  1
	I0625 16:11:25.568475   44522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:11:25.568731   44522 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:11:25.568877   44522 main.go:141] libmachine: (ha-674765-m02) Calling .DriverName
	I0625 16:11:25.569082   44522 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:11:25.569102   44522 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHHostname
	I0625 16:11:25.571469   44522 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:11:25.571805   44522 main.go:141] libmachine: (ha-674765-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f4:2d", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 17:06:59 +0000 UTC Type:0 Mac:52:54:00:10:f4:2d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-674765-m02 Clientid:01:52:54:00:10:f4:2d}
	I0625 16:11:25.571843   44522 main.go:141] libmachine: (ha-674765-m02) DBG | domain ha-674765-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:10:f4:2d in network mk-ha-674765
	I0625 16:11:25.571942   44522 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHPort
	I0625 16:11:25.572109   44522 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHKeyPath
	I0625 16:11:25.572326   44522 main.go:141] libmachine: (ha-674765-m02) Calling .GetSSHUsername
	I0625 16:11:25.572471   44522 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m02/id_rsa Username:docker}
	I0625 16:11:25.657284   44522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:11:25.676670   44522 kubeconfig.go:125] found "ha-674765" server: "https://192.168.39.254:8443"
	I0625 16:11:25.676697   44522 api_server.go:166] Checking apiserver status ...
	I0625 16:11:25.676734   44522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 16:11:25.691554   44522 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1411/cgroup
	W0625 16:11:25.703375   44522 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1411/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0625 16:11:25.703414   44522 ssh_runner.go:195] Run: ls
	I0625 16:11:25.707516   44522 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0625 16:11:25.711981   44522 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0625 16:11:25.712000   44522 status.go:422] ha-674765-m02 apiserver status = Running (err=<nil>)
	I0625 16:11:25.712009   44522 status.go:257] ha-674765-m02 status: &{Name:ha-674765-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0625 16:11:25.712032   44522 status.go:255] checking status of ha-674765-m04 ...
	I0625 16:11:25.712392   44522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:11:25.712432   44522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:11:25.727021   44522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45247
	I0625 16:11:25.727379   44522 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:11:25.727770   44522 main.go:141] libmachine: Using API Version  1
	I0625 16:11:25.727787   44522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:11:25.728054   44522 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:11:25.728217   44522 main.go:141] libmachine: (ha-674765-m04) Calling .GetState
	I0625 16:11:25.729807   44522 status.go:330] ha-674765-m04 host status = "Running" (err=<nil>)
	I0625 16:11:25.729820   44522 host.go:66] Checking if "ha-674765-m04" exists ...
	I0625 16:11:25.730146   44522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:11:25.730185   44522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:11:25.744958   44522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34875
	I0625 16:11:25.745364   44522 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:11:25.745818   44522 main.go:141] libmachine: Using API Version  1
	I0625 16:11:25.745837   44522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:11:25.746144   44522 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:11:25.746330   44522 main.go:141] libmachine: (ha-674765-m04) Calling .GetIP
	I0625 16:11:25.748849   44522 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:11:25.749269   44522 main.go:141] libmachine: (ha-674765-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:21:a2", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 17:08:53 +0000 UTC Type:0 Mac:52:54:00:7a:21:a2 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-674765-m04 Clientid:01:52:54:00:7a:21:a2}
	I0625 16:11:25.749286   44522 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined IP address 192.168.39.7 and MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:11:25.749413   44522 host.go:66] Checking if "ha-674765-m04" exists ...
	I0625 16:11:25.749666   44522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:11:25.749696   44522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:11:25.763652   44522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39821
	I0625 16:11:25.764002   44522 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:11:25.764437   44522 main.go:141] libmachine: Using API Version  1
	I0625 16:11:25.764456   44522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:11:25.764727   44522 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:11:25.764905   44522 main.go:141] libmachine: (ha-674765-m04) Calling .DriverName
	I0625 16:11:25.765072   44522 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:11:25.765098   44522 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHHostname
	I0625 16:11:25.767631   44522 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:11:25.768024   44522 main.go:141] libmachine: (ha-674765-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:21:a2", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 17:08:53 +0000 UTC Type:0 Mac:52:54:00:7a:21:a2 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-674765-m04 Clientid:01:52:54:00:7a:21:a2}
	I0625 16:11:25.768074   44522 main.go:141] libmachine: (ha-674765-m04) DBG | domain ha-674765-m04 has defined IP address 192.168.39.7 and MAC address 52:54:00:7a:21:a2 in network mk-ha-674765
	I0625 16:11:25.768154   44522 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHPort
	I0625 16:11:25.768307   44522 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHKeyPath
	I0625 16:11:25.768463   44522 main.go:141] libmachine: (ha-674765-m04) Calling .GetSSHUsername
	I0625 16:11:25.768585   44522 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765-m04/id_rsa Username:docker}
	W0625 16:11:44.278636   44522 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.7:22: connect: no route to host
	W0625 16:11:44.278711   44522 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.7:22: connect: no route to host
	E0625 16:11:44.278724   44522 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.7:22: connect: no route to host
	I0625 16:11:44.278730   44522 status.go:257] ha-674765-m04 status: &{Name:ha-674765-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0625 16:11:44.278744   44522 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.7:22: connect: no route to host

** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-674765 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-674765 -n ha-674765
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-674765 logs -n 25: (1.639157257s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-674765 ssh -n ha-674765-m02 sudo cat                                          | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | /home/docker/cp-test_ha-674765-m03_ha-674765-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-674765 cp ha-674765-m03:/home/docker/cp-test.txt                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m04:/home/docker/cp-test_ha-674765-m03_ha-674765-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n ha-674765-m04 sudo cat                                          | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | /home/docker/cp-test_ha-674765-m03_ha-674765-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-674765 cp testdata/cp-test.txt                                                | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-674765 cp ha-674765-m04:/home/docker/cp-test.txt                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2213486447/001/cp-test_ha-674765-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-674765 cp ha-674765-m04:/home/docker/cp-test.txt                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765:/home/docker/cp-test_ha-674765-m04_ha-674765.txt                       |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n ha-674765 sudo cat                                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | /home/docker/cp-test_ha-674765-m04_ha-674765.txt                                 |           |         |         |                     |                     |
	| cp      | ha-674765 cp ha-674765-m04:/home/docker/cp-test.txt                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m02:/home/docker/cp-test_ha-674765-m04_ha-674765-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n ha-674765-m02 sudo cat                                          | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | /home/docker/cp-test_ha-674765-m04_ha-674765-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-674765 cp ha-674765-m04:/home/docker/cp-test.txt                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m03:/home/docker/cp-test_ha-674765-m04_ha-674765-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n                                                                 | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | ha-674765-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-674765 ssh -n ha-674765-m03 sudo cat                                          | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC | 25 Jun 24 15:59 UTC |
	|         | /home/docker/cp-test_ha-674765-m04_ha-674765-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-674765 node stop m02 -v=7                                                     | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 15:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-674765 node start m02 -v=7                                                    | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 16:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-674765 -v=7                                                           | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 16:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-674765 -v=7                                                                | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 16:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-674765 --wait=true -v=7                                                    | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 16:05 UTC | 25 Jun 24 16:09 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-674765                                                                | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 16:09 UTC |                     |
	| node    | ha-674765 node delete m03 -v=7                                                   | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 16:09 UTC | 25 Jun 24 16:09 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-674765 stop -v=7                                                              | ha-674765 | jenkins | v1.33.1 | 25 Jun 24 16:09 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/25 16:05:09
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0625 16:05:09.817265   42394 out.go:291] Setting OutFile to fd 1 ...
	I0625 16:05:09.817520   42394 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:05:09.817529   42394 out.go:304] Setting ErrFile to fd 2...
	I0625 16:05:09.817534   42394 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:05:09.817691   42394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 16:05:09.818224   42394 out.go:298] Setting JSON to false
	I0625 16:05:09.819082   42394 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6454,"bootTime":1719325056,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0625 16:05:09.819137   42394 start.go:139] virtualization: kvm guest
	I0625 16:05:09.821289   42394 out.go:177] * [ha-674765] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0625 16:05:09.822774   42394 out.go:177]   - MINIKUBE_LOCATION=19128
	I0625 16:05:09.822801   42394 notify.go:220] Checking for updates...
	I0625 16:05:09.825480   42394 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0625 16:05:09.826758   42394 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19128-13846/kubeconfig
	I0625 16:05:09.827938   42394 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 16:05:09.829113   42394 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0625 16:05:09.830302   42394 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0625 16:05:09.831775   42394 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:05:09.831878   42394 driver.go:392] Setting default libvirt URI to qemu:///system
	I0625 16:05:09.832267   42394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:05:09.832318   42394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:05:09.847483   42394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36969
	I0625 16:05:09.847979   42394 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:05:09.848550   42394 main.go:141] libmachine: Using API Version  1
	I0625 16:05:09.848574   42394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:05:09.848930   42394 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:05:09.849094   42394 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 16:05:09.882021   42394 out.go:177] * Using the kvm2 driver based on existing profile
	I0625 16:05:09.883677   42394 start.go:297] selected driver: kvm2
	I0625 16:05:09.883690   42394 start.go:901] validating driver "kvm2" against &{Name:ha-674765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-674765 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.77 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.7 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 16:05:09.883853   42394 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0625 16:05:09.884271   42394 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0625 16:05:09.884343   42394 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19128-13846/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0625 16:05:09.898595   42394 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0625 16:05:09.899222   42394 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0625 16:05:09.899251   42394 cni.go:84] Creating CNI manager for ""
	I0625 16:05:09.899258   42394 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0625 16:05:09.899331   42394 start.go:340] cluster config:
	{Name:ha-674765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-674765 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.77 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.7 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 16:05:09.899470   42394 iso.go:125] acquiring lock: {Name:mk76df652d5e768afc73443035d5ecb8b75ed16c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0625 16:05:09.901092   42394 out.go:177] * Starting "ha-674765" primary control-plane node in "ha-674765" cluster
	I0625 16:05:09.902178   42394 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0625 16:05:09.902211   42394 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0625 16:05:09.902221   42394 cache.go:56] Caching tarball of preloaded images
	I0625 16:05:09.902276   42394 preload.go:173] Found /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0625 16:05:09.902286   42394 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0625 16:05:09.902397   42394 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/config.json ...
	I0625 16:05:09.902603   42394 start.go:360] acquireMachinesLock for ha-674765: {Name:mk2a1ebee912b37a2b68bf2f76641f82f8fc2fcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0625 16:05:09.902655   42394 start.go:364] duration metric: took 35.527µs to acquireMachinesLock for "ha-674765"
	I0625 16:05:09.902668   42394 start.go:96] Skipping create...Using existing machine configuration
	I0625 16:05:09.902678   42394 fix.go:54] fixHost starting: 
	I0625 16:05:09.902947   42394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:05:09.902976   42394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:05:09.915910   42394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36393
	I0625 16:05:09.916316   42394 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:05:09.916838   42394 main.go:141] libmachine: Using API Version  1
	I0625 16:05:09.916859   42394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:05:09.917146   42394 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:05:09.917310   42394 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 16:05:09.917445   42394 main.go:141] libmachine: (ha-674765) Calling .GetState
	I0625 16:05:09.919017   42394 fix.go:112] recreateIfNeeded on ha-674765: state=Running err=<nil>
	W0625 16:05:09.919049   42394 fix.go:138] unexpected machine state, will restart: <nil>
	I0625 16:05:09.920810   42394 out.go:177] * Updating the running kvm2 "ha-674765" VM ...
	I0625 16:05:09.922001   42394 machine.go:94] provisionDockerMachine start ...
	I0625 16:05:09.922027   42394 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 16:05:09.922232   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:05:09.924729   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:05:09.925157   42394 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:05:09.925179   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:05:09.925377   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:05:09.925551   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:05:09.925704   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:05:09.925852   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:05:09.926013   42394 main.go:141] libmachine: Using SSH client type: native
	I0625 16:05:09.926214   42394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0625 16:05:09.926224   42394 main.go:141] libmachine: About to run SSH command:
	hostname
	I0625 16:05:10.040686   42394 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-674765
	
	I0625 16:05:10.040715   42394 main.go:141] libmachine: (ha-674765) Calling .GetMachineName
	I0625 16:05:10.040965   42394 buildroot.go:166] provisioning hostname "ha-674765"
	I0625 16:05:10.040989   42394 main.go:141] libmachine: (ha-674765) Calling .GetMachineName
	I0625 16:05:10.041210   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:05:10.043642   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:05:10.043961   42394 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:05:10.043991   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:05:10.044126   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:05:10.044310   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:05:10.044470   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:05:10.044573   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:05:10.044749   42394 main.go:141] libmachine: Using SSH client type: native
	I0625 16:05:10.044910   42394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0625 16:05:10.044922   42394 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-674765 && echo "ha-674765" | sudo tee /etc/hostname
	I0625 16:05:10.165501   42394 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-674765
	
	I0625 16:05:10.165525   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:05:10.168115   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:05:10.168467   42394 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:05:10.168497   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:05:10.168659   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:05:10.168829   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:05:10.168955   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:05:10.169089   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:05:10.169207   42394 main.go:141] libmachine: Using SSH client type: native
	I0625 16:05:10.169392   42394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0625 16:05:10.169408   42394 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-674765' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-674765/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-674765' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0625 16:05:10.280769   42394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0625 16:05:10.280814   42394 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19128-13846/.minikube CaCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19128-13846/.minikube}
	I0625 16:05:10.280845   42394 buildroot.go:174] setting up certificates
	I0625 16:05:10.280853   42394 provision.go:84] configureAuth start
	I0625 16:05:10.280864   42394 main.go:141] libmachine: (ha-674765) Calling .GetMachineName
	I0625 16:05:10.281129   42394 main.go:141] libmachine: (ha-674765) Calling .GetIP
	I0625 16:05:10.283846   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:05:10.284168   42394 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:05:10.284195   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:05:10.284332   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:05:10.286056   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:05:10.286376   42394 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:05:10.286394   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:05:10.286558   42394 provision.go:143] copyHostCerts
	I0625 16:05:10.286588   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem
	I0625 16:05:10.286643   42394 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem, removing ...
	I0625 16:05:10.286654   42394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem
	I0625 16:05:10.286728   42394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem (1123 bytes)
	I0625 16:05:10.286822   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem
	I0625 16:05:10.286852   42394 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem, removing ...
	I0625 16:05:10.286863   42394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem
	I0625 16:05:10.286901   42394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem (1679 bytes)
	I0625 16:05:10.286967   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem
	I0625 16:05:10.286989   42394 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem, removing ...
	I0625 16:05:10.286995   42394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem
	I0625 16:05:10.287028   42394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem (1078 bytes)
	I0625 16:05:10.287098   42394 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem org=jenkins.ha-674765 san=[127.0.0.1 192.168.39.128 ha-674765 localhost minikube]
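	For anyone debugging TLS failures against this machine later in the report, a hypothetical spot-check (not part of the captured run) to confirm the SANs that ended up in the generated server.pem named in the line above:
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'
	    # expected to list: 127.0.0.1, 192.168.39.128, ha-674765, localhost, minikube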
	I0625 16:05:10.610048   42394 provision.go:177] copyRemoteCerts
	I0625 16:05:10.610104   42394 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0625 16:05:10.610128   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:05:10.612686   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:05:10.612995   42394 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:05:10.613024   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:05:10.613219   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:05:10.613444   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:05:10.613576   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:05:10.613728   42394 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 16:05:10.701508   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0625 16:05:10.701582   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0625 16:05:10.729319   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0625 16:05:10.729388   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0625 16:05:10.757285   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0625 16:05:10.757368   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0625 16:05:10.782256   42394 provision.go:87] duration metric: took 501.388189ms to configureAuth
	I0625 16:05:10.782283   42394 buildroot.go:189] setting minikube options for container-runtime
	I0625 16:05:10.782550   42394 config.go:182] Loaded profile config "ha-674765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:05:10.782658   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:05:10.785018   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:05:10.785514   42394 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:05:10.785543   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:05:10.785646   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:05:10.785851   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:05:10.786007   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:05:10.786151   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:05:10.786285   42394 main.go:141] libmachine: Using SSH client type: native
	I0625 16:05:10.786463   42394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0625 16:05:10.786505   42394 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0625 16:06:41.661731   42394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0625 16:06:41.661758   42394 machine.go:97] duration metric: took 1m31.739743418s to provisionDockerMachine
	I0625 16:06:41.661772   42394 start.go:293] postStartSetup for "ha-674765" (driver="kvm2")
	I0625 16:06:41.661786   42394 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0625 16:06:41.661808   42394 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 16:06:41.662122   42394 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0625 16:06:41.662191   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:06:41.665074   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:06:41.665486   42394 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:06:41.665518   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:06:41.665642   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:06:41.665819   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:06:41.665985   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:06:41.666131   42394 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 16:06:41.749633   42394 ssh_runner.go:195] Run: cat /etc/os-release
	I0625 16:06:41.753985   42394 info.go:137] Remote host: Buildroot 2023.02.9
	I0625 16:06:41.754006   42394 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/addons for local assets ...
	I0625 16:06:41.754069   42394 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/files for local assets ...
	I0625 16:06:41.754144   42394 filesync.go:149] local asset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> 212392.pem in /etc/ssl/certs
	I0625 16:06:41.754155   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> /etc/ssl/certs/212392.pem
	I0625 16:06:41.754234   42394 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0625 16:06:41.763287   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem --> /etc/ssl/certs/212392.pem (1708 bytes)
	I0625 16:06:41.786264   42394 start.go:296] duration metric: took 124.481229ms for postStartSetup
	I0625 16:06:41.786297   42394 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 16:06:41.786549   42394 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0625 16:06:41.786573   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:06:41.788681   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:06:41.788978   42394 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:06:41.789006   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:06:41.789138   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:06:41.789299   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:06:41.789463   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:06:41.789597   42394 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	W0625 16:06:41.872420   42394 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0625 16:06:41.872441   42394 fix.go:56] duration metric: took 1m31.96976201s for fixHost
	I0625 16:06:41.872465   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:06:41.874807   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:06:41.875178   42394 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:06:41.875199   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:06:41.875345   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:06:41.875513   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:06:41.875660   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:06:41.875794   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:06:41.875951   42394 main.go:141] libmachine: Using SSH client type: native
	I0625 16:06:41.876148   42394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0625 16:06:41.876160   42394 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0625 16:06:41.982782   42394 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719331601.945785731
	
	I0625 16:06:41.982807   42394 fix.go:216] guest clock: 1719331601.945785731
	I0625 16:06:41.982817   42394 fix.go:229] Guest: 2024-06-25 16:06:41.945785731 +0000 UTC Remote: 2024-06-25 16:06:41.872450956 +0000 UTC m=+92.088965672 (delta=73.334775ms)
	I0625 16:06:41.982849   42394 fix.go:200] guest clock delta is within tolerance: 73.334775ms
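	For reference, the delta reported above is simply the guest clock reading minus the host-side timestamp of the same moment: 1719331601.945785731 s - 1719331601.872450956 s = 0.073334775 s, i.e. about 73.33 ms, well inside the tolerance the line above refers to.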
	I0625 16:06:41.982858   42394 start.go:83] releasing machines lock for "ha-674765", held for 1m32.080192997s
	I0625 16:06:41.982887   42394 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 16:06:41.983141   42394 main.go:141] libmachine: (ha-674765) Calling .GetIP
	I0625 16:06:41.985489   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:06:41.985847   42394 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:06:41.985873   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:06:41.986022   42394 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 16:06:41.986495   42394 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 16:06:41.986667   42394 main.go:141] libmachine: (ha-674765) Calling .DriverName
	I0625 16:06:41.986725   42394 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0625 16:06:41.986778   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:06:41.986843   42394 ssh_runner.go:195] Run: cat /version.json
	I0625 16:06:41.986864   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHHostname
	I0625 16:06:41.989114   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:06:41.989131   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:06:41.989488   42394 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:06:41.989513   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:06:41.989538   42394 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:06:41.989554   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:06:41.989718   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:06:41.989722   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHPort
	I0625 16:06:41.989872   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:06:41.989925   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHKeyPath
	I0625 16:06:41.990000   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:06:41.990060   42394 main.go:141] libmachine: (ha-674765) Calling .GetSSHUsername
	I0625 16:06:41.990119   42394 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 16:06:41.990160   42394 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/ha-674765/id_rsa Username:docker}
	I0625 16:06:42.075126   42394 ssh_runner.go:195] Run: systemctl --version
	I0625 16:06:42.100317   42394 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0625 16:06:42.270994   42394 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0625 16:06:42.276889   42394 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0625 16:06:42.276947   42394 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0625 16:06:42.286219   42394 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0625 16:06:42.286237   42394 start.go:494] detecting cgroup driver to use...
	I0625 16:06:42.286301   42394 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0625 16:06:42.303209   42394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0625 16:06:42.317211   42394 docker.go:217] disabling cri-docker service (if available) ...
	I0625 16:06:42.317249   42394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0625 16:06:42.330574   42394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0625 16:06:42.343639   42394 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0625 16:06:42.488289   42394 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0625 16:06:42.632161   42394 docker.go:233] disabling docker service ...
	I0625 16:06:42.632220   42394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0625 16:06:42.649058   42394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0625 16:06:42.662269   42394 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0625 16:06:42.805188   42394 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0625 16:06:42.944795   42394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0625 16:06:42.958589   42394 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0625 16:06:42.977288   42394 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0625 16:06:42.977349   42394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:06:42.987568   42394 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0625 16:06:42.987616   42394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:06:42.997392   42394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:06:43.007273   42394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:06:43.017239   42394 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0625 16:06:43.027334   42394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:06:43.037256   42394 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:06:43.048837   42394 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
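	Taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, put conmon in the "pod" cgroup, and open unprivileged low ports. A hypothetical spot-check of the resulting file (assuming the edits applied cleanly; not part of the captured run):
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    # roughly expected:
	    #   pause_image = "registry.k8s.io/pause:3.9"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",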
	I0625 16:06:43.058485   42394 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0625 16:06:43.067715   42394 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0625 16:06:43.076430   42394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 16:06:43.215238   42394 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0625 16:06:47.621064   42394 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.405797623s)
	I0625 16:06:47.621097   42394 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0625 16:06:47.621137   42394 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0625 16:06:47.626000   42394 start.go:562] Will wait 60s for crictl version
	I0625 16:06:47.626033   42394 ssh_runner.go:195] Run: which crictl
	I0625 16:06:47.629820   42394 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0625 16:06:47.666102   42394 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0625 16:06:47.666162   42394 ssh_runner.go:195] Run: crio --version
	I0625 16:06:47.695046   42394 ssh_runner.go:195] Run: crio --version
	I0625 16:06:47.724143   42394 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0625 16:06:47.725373   42394 main.go:141] libmachine: (ha-674765) Calling .GetIP
	I0625 16:06:47.727754   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:06:47.728112   42394 main.go:141] libmachine: (ha-674765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:3a:48", ip: ""} in network mk-ha-674765: {Iface:virbr1 ExpiryTime:2024-06-25 16:55:38 +0000 UTC Type:0 Mac:52:54:00:6e:3a:48 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-674765 Clientid:01:52:54:00:6e:3a:48}
	I0625 16:06:47.728136   42394 main.go:141] libmachine: (ha-674765) DBG | domain ha-674765 has defined IP address 192.168.39.128 and MAC address 52:54:00:6e:3a:48 in network mk-ha-674765
	I0625 16:06:47.728354   42394 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0625 16:06:47.732896   42394 kubeadm.go:877] updating cluster {Name:ha-674765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-674765 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.77 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.7 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0625 16:06:47.733074   42394 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0625 16:06:47.733133   42394 ssh_runner.go:195] Run: sudo crictl images --output json
	I0625 16:06:47.778894   42394 crio.go:514] all images are preloaded for cri-o runtime.
	I0625 16:06:47.778916   42394 crio.go:433] Images already preloaded, skipping extraction
	I0625 16:06:47.778966   42394 ssh_runner.go:195] Run: sudo crictl images --output json
	I0625 16:06:47.811499   42394 crio.go:514] all images are preloaded for cri-o runtime.
	I0625 16:06:47.811518   42394 cache_images.go:84] Images are preloaded, skipping loading
	I0625 16:06:47.811531   42394 kubeadm.go:928] updating node { 192.168.39.128 8443 v1.30.2 crio true true} ...
	I0625 16:06:47.811657   42394 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-674765 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-674765 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
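	The ExecStart above is materialized as a systemd drop-in a few steps further down (the 309-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). A hypothetical way to inspect the merged unit on the VM, not part of the captured run:
	    systemctl cat kubelet
	    # should show /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in carrying
	    # --hostname-override=ha-674765 and --node-ip=192.168.39.128 from the command line above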
	I0625 16:06:47.811764   42394 ssh_runner.go:195] Run: crio config
	I0625 16:06:47.857857   42394 cni.go:84] Creating CNI manager for ""
	I0625 16:06:47.857873   42394 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0625 16:06:47.857887   42394 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0625 16:06:47.857906   42394 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.128 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-674765 NodeName:ha-674765 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0625 16:06:47.858029   42394 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.128
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-674765"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
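	This rendered config is what lands in /var/tmp/minikube/kubeadm.yaml.new a few lines below (the 2153-byte scp). Purely as an illustration, and not an invocation taken from this run: a multi-document file of this shape is the kind of input kubeadm consumes through its --config flag, roughly along the lines of
	    sudo /var/lib/minikube/binaries/v1.30.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	(the kubeadm binary path here is assumed from the binaries directory listed further down).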
	I0625 16:06:47.858046   42394 kube-vip.go:115] generating kube-vip config ...
	I0625 16:06:47.858082   42394 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0625 16:06:47.869688   42394 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
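	A hypothetical check, not part of the captured run, that the IPVS and conntrack modules loaded by the modprobe above are actually present before kube-vip's load-balancer mode relies on them:
	    lsmod | grep -E 'ip_vs|nf_conntrack'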
	I0625 16:06:47.869770   42394 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
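	The static pod above runs kube-vip with leader election (vip_leaderelection / plndr-cp-lock), so whichever control-plane node currently holds the lease announces the HA VIP 192.168.39.254 on eth0 and load-balances API traffic on port 8443. A hypothetical check on the elected leader, not part of the captured run:
	    ip -4 addr show dev eth0 | grep 192.168.39.254   # the VIP should be bound on the leader only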
	I0625 16:06:47.869812   42394 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0625 16:06:47.878780   42394 binaries.go:44] Found k8s binaries, skipping transfer
	I0625 16:06:47.878839   42394 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0625 16:06:47.887764   42394 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0625 16:06:47.904389   42394 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0625 16:06:47.920572   42394 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0625 16:06:47.936468   42394 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0625 16:06:47.952990   42394 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0625 16:06:47.957290   42394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 16:06:48.098562   42394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0625 16:06:48.113331   42394 certs.go:68] Setting up /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765 for IP: 192.168.39.128
	I0625 16:06:48.113357   42394 certs.go:194] generating shared ca certs ...
	I0625 16:06:48.113377   42394 certs.go:226] acquiring lock for ca certs: {Name:mkac904b769881cd26c50f043dc80ff92937f71f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:06:48.113527   42394 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key
	I0625 16:06:48.113579   42394 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key
	I0625 16:06:48.113593   42394 certs.go:256] generating profile certs ...
	I0625 16:06:48.113687   42394 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/client.key
	I0625 16:06:48.113723   42394 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.4cb2e099
	I0625 16:06:48.113749   42394 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.4cb2e099 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.128 192.168.39.53 192.168.39.77 192.168.39.254]
	I0625 16:06:48.207036   42394 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.4cb2e099 ...
	I0625 16:06:48.207065   42394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.4cb2e099: {Name:mk0733bebf3f9051b8529571108dd2538df7993c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:06:48.207231   42394 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.4cb2e099 ...
	I0625 16:06:48.207245   42394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.4cb2e099: {Name:mk8f24a82632e47ed049a4c94ea6a0986178e217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:06:48.207318   42394 certs.go:381] copying /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt.4cb2e099 -> /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt
	I0625 16:06:48.207454   42394 certs.go:385] copying /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key.4cb2e099 -> /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key
	I0625 16:06:48.207587   42394 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.key
	I0625 16:06:48.207601   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0625 16:06:48.207614   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0625 16:06:48.207626   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0625 16:06:48.207639   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0625 16:06:48.207651   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0625 16:06:48.207663   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0625 16:06:48.207675   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0625 16:06:48.207686   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0625 16:06:48.207731   42394 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem (1338 bytes)
	W0625 16:06:48.207756   42394 certs.go:480] ignoring /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239_empty.pem, impossibly tiny 0 bytes
	I0625 16:06:48.207766   42394 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem (1679 bytes)
	I0625 16:06:48.207787   42394 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem (1078 bytes)
	I0625 16:06:48.207807   42394 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem (1123 bytes)
	I0625 16:06:48.207830   42394 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem (1679 bytes)
	I0625 16:06:48.207864   42394 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem (1708 bytes)
	I0625 16:06:48.207890   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:06:48.207903   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem -> /usr/share/ca-certificates/21239.pem
	I0625 16:06:48.207915   42394 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> /usr/share/ca-certificates/212392.pem
	I0625 16:06:48.208457   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0625 16:06:48.233191   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0625 16:06:48.256024   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0625 16:06:48.278992   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0625 16:06:48.302164   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0625 16:06:48.324402   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0625 16:06:48.347199   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0625 16:06:48.369959   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/ha-674765/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0625 16:06:48.392547   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0625 16:06:48.414855   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem --> /usr/share/ca-certificates/21239.pem (1338 bytes)
	I0625 16:06:48.437793   42394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem --> /usr/share/ca-certificates/212392.pem (1708 bytes)
	I0625 16:06:48.460737   42394 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0625 16:06:48.476664   42394 ssh_runner.go:195] Run: openssl version
	I0625 16:06:48.482526   42394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0625 16:06:48.493355   42394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:06:48.498067   42394 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 25 15:10 /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:06:48.498133   42394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:06:48.504113   42394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0625 16:06:48.513540   42394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21239.pem && ln -fs /usr/share/ca-certificates/21239.pem /etc/ssl/certs/21239.pem"
	I0625 16:06:48.524097   42394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21239.pem
	I0625 16:06:48.528662   42394 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 25 15:51 /usr/share/ca-certificates/21239.pem
	I0625 16:06:48.528696   42394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21239.pem
	I0625 16:06:48.534440   42394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21239.pem /etc/ssl/certs/51391683.0"
	I0625 16:06:48.545113   42394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/212392.pem && ln -fs /usr/share/ca-certificates/212392.pem /etc/ssl/certs/212392.pem"
	I0625 16:06:48.555854   42394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/212392.pem
	I0625 16:06:48.560160   42394 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 25 15:51 /usr/share/ca-certificates/212392.pem
	I0625 16:06:48.560202   42394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/212392.pem
	I0625 16:06:48.565968   42394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/212392.pem /etc/ssl/certs/3ec20f2e.0"
	I0625 16:06:48.575602   42394 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0625 16:06:48.580058   42394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0625 16:06:48.585491   42394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0625 16:06:48.591444   42394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0625 16:06:48.596827   42394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0625 16:06:48.602143   42394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0625 16:06:48.607441   42394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0625 16:06:48.612779   42394 kubeadm.go:391] StartCluster: {Name:ha-674765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-674765 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.77 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.7 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:
false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 16:06:48.612892   42394 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0625 16:06:48.612922   42394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0625 16:06:48.650583   42394 cri.go:89] found id: "daf79ee8eb5658497e09cbd16752883ca88b8bfd2864ee00372d27eeb5806285"
	I0625 16:06:48.650605   42394 cri.go:89] found id: "fadbd7cdc44f4b5fab7ac7f2de7b57b27f6dab29aa6dba74bb989ef8265b7cd2"
	I0625 16:06:48.650610   42394 cri.go:89] found id: "1ec9f1864b5040c1de810ed7acdfe5a3f522fad6960bc9d5b6942aceabad78e1"
	I0625 16:06:48.650614   42394 cri.go:89] found id: "ee37c24ba30f73306a896f334f612e36909a30fe60cc981a14e0a33c613ee062"
	I0625 16:06:48.650618   42394 cri.go:89] found id: "ac8ac5af3896e66b7a766c2dee0a0ca88408fc6840949a3c60309e6d98f11fa1"
	I0625 16:06:48.650622   42394 cri.go:89] found id: "ec00b1016861e1acd703a40af2b274a6d7bd1b0b9c8e37a463cd46994e27ce0b"
	I0625 16:06:48.650639   42394 cri.go:89] found id: "5dff3834f63a382816899f273fe9970c90e171b84aa75c6626dd2435c35f00d8"
	I0625 16:06:48.650778   42394 cri.go:89] found id: "7cea2f95fa7a7ef2b8d281a5e1b9c59c317c03084458fe57df036d763e43180c"
	I0625 16:06:48.650785   42394 cri.go:89] found id: "c3ed8ce894547a7bc3deba857b5d7d733af8ba225cb579c469f090460bff27d3"
	I0625 16:06:48.650792   42394 cri.go:89] found id: "a7ed432b8fb613e5494924e83c46de1bfe881b19bdb28b693da5298cf99f2e65"
	I0625 16:06:48.650796   42394 cri.go:89] found id: "9938e238e129cd0d797a5de776e0d7b756bc8f39188223f4151974b19fb7506c"
	I0625 16:06:48.650800   42394 cri.go:89] found id: "a40f818bed683af529089283a92813b3d87d93d9cb9290b6081645f3bced82fa"
	I0625 16:06:48.650804   42394 cri.go:89] found id: "e903f61a215f1423f9e270e19b11a7357fee578dc62e4dd059dbe9c47a999c32"
	I0625 16:06:48.650807   42394 cri.go:89] found id: ""
	I0625 16:06:48.650850   42394 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jun 25 16:11:44 ha-674765 crio[3936]: time="2024-06-25 16:11:44.885302696Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719331904885279329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4edb9130-0442-433c-a1ee-51978234e8ba name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:11:44 ha-674765 crio[3936]: time="2024-06-25 16:11:44.886029255Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee886683-e8e4-4b2d-918c-d3ce07da2899 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:11:44 ha-674765 crio[3936]: time="2024-06-25 16:11:44.886105650Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee886683-e8e4-4b2d-918c-d3ce07da2899 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:11:44 ha-674765 crio[3936]: time="2024-06-25 16:11:44.886479873Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e940f7c608e809152329248d962b7489987764dda2f7576dc8ae2d5448a126b,PodSandboxId:c65aa084da236ba3d7ea0e7917b41dbcdeb30f405ebd8c15df8171e1500f95f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1719331792608349997,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c227c5cf-2bd6-4ebf-9fdb-09d4229cf421,},Annotations:map[string]string{io.kubernetes.container.hash: 69a09345,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c3426ec1f55cfcf657863a2bb9d1d1ed319358c204b3013dc6ea1040ef44ede,PodSandboxId:c1c292c76f6457f213fc624fc352e1d70746b4876bd376bbdfe5c523e7ae157c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1719331679606027881,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ntq77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37736a9f-5b4c-421c-9027-81e961ab8550,},Annotations:map[string]string{io.kubernetes.container.hash: d770c3b8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62eb4fc54721a6cc41ca7c6a7e298bebe70e0bd709ee162f8002bcd99b09f69,PodSandboxId:118263dc5302bc116abde0d88cbfa447be7e9d2d76ee9dfe5e54d3287225cdaa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719331650600404866,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b6a570fe6dbd4fb43c7d5ee0090fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91160cb26919875a6826440f019fdc7a29c4cb1cca9f728b8634425c5d0d0055,PodSandboxId:a74cac97870233d5be7e99d859054dbe59bb62b62dcebb9b93d53d7d97e6ff21,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1719331647935856429,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qjw4r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49031b4f-d04c-44bf-9725-094e7df6945c,},Annotations:map[string]string{io.kubernetes.container.hash: 37a65fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5616a5da2347e48b25e5207a5d383a1b8395ebbdf7444bc962fbab867ccdb3e,PodSandboxId:95eb77c224efb384a2d8a7be87f4b501d838237501dd4657d3dfe9e941a351cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719331645702587343,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551c5ef04195f1917356d613c1c2825c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b5bb08a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81c1834b73fff1174eded512f55252efedba82c00af1234c13e73a27f339b56c,PodSandboxId:4d0b2a1c727ceef717a7d10522e55f9b21763d10b548f6f8ca153b04c08a6ac9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1719331628128137056,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a01324348bb22c8ce03c490b59b42a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:993db242e335019216c340e823497dc2a88a83153badf0eecd3d96c454418fa2,PodSandboxId:2a19a6bbe06bb54deaff5da581b9b45fcd9494227983dd022d0426bb0ab3ccd9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719331614792099718,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh9n5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a24539-3168-42cc-93b3-d0b1e283d0bd,},Annotations:map[string]string{io.kubernetes.container.hash: 676d07db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:5aa65b67e926f58e42af575f038a6429658821d33a89c7d8113e504ad3e6d174,PodSandboxId:c65aa084da236ba3d7ea0e7917b41dbcdeb30f405ebd8c15df8171e1500f95f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1719331614912204262,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c227c5cf-2bd6-4ebf-9fdb-09d4229cf421,},Annotations:map[string]string{io.kubernetes.container.hash: 69a09345,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:967eda17c
ca156722128d0041068e08832d6bb3264caea0bc5fd19be28bf6525,PodSandboxId:cb4f06a80c952a0b7022ea2ce0a18462d11c45e82ab40aa1a165b789f8a376ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719331614718978303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-84zkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f6426f8-a0c4-470c-b2b1-b62fa304c078,},Annotations:map[string]string{io.kubernetes.container.hash: f8bf5066,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca99b81f8f3af37b3a11b2b7acc63b243a795b431477e675b66e1ee8e98320f2,PodSandboxId:b24964e6136b211047c159c805ba1b9d39cffa512722bad477a943783aa84d2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719331614646463810,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28db5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1426e4a3-2f25-47e9-9b28-b23a81a3a19a,},Annotations:map[string]string{io.kubernetes.container.hash: 370c32f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccd04da046aa1783311d35bbb47f916525d9abb5c660cc42d5a6bdebb5c66006,PodSandboxId:bc7e32fc0f4498733a23c791acb9f215a768d0753f5b4a856a40a6f127b6e5fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719331614552451117,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
950a69aa02d41e644621ad51e81615e,},Annotations:map[string]string{io.kubernetes.container.hash: ca1dc9e0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:302139a799fb523035e8a52ecadeecbc2fbc59026ad9ff69cbc5264b7192ee4d,PodSandboxId:118263dc5302bc116abde0d88cbfa447be7e9d2d76ee9dfe5e54d3287225cdaa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1719331614417774935,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 20b6a570fe6dbd4fb43c7d5ee0090fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9d8af94261757538ed64ce37812b4b63ab671c65b53859c558794dffd24a708,PodSandboxId:95eb77c224efb384a2d8a7be87f4b501d838237501dd4657d3dfe9e941a351cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1719331614473721171,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551c5ef04195f19173
56d613c1c2825c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b5bb08a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e9ec15e9ff71c88b7a6fa2117facf91756b7926806742ce61de5689d4eb2a9a,PodSandboxId:22f98f821bc50e6c29db3ed17ae565a5aaa5284f725895f2745c392ce3d8c318,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719331614384125800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b7475c05d199411d1b89c7e4e58c52,},Anno
tations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:527ce3a2a7ca580efeb561b1d051220b75eabca666282f31d4b998998c5ae267,PodSandboxId:c1c292c76f6457f213fc624fc352e1d70746b4876bd376bbdfe5c523e7ae157c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1719331610134727020,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ntq77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37736a9f-5b4c-421c-9027-81e961ab8550,},Annotations:map[string]string{io.kuberne
tes.container.hash: d770c3b8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd7837c56cda33edd21808fe9d0441fdd08abd1bdebe8f801a3611412c9f4915,PodSandboxId:d18f421cdb437abaad95182a5581045ed7639dbd944aa4d3b7cbcf8551a67f1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1719331123602540696,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qjw4r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49031b4f-d04c-44bf-9725-094e7df6945c,},Annotations:map[string]string{io.kubernete
s.container.hash: 37a65fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec00b1016861e1acd703a40af2b274a6d7bd1b0b9c8e37a463cd46994e27ce0b,PodSandboxId:2249d5de30294a4411052d912ac663f8b0d2f1f1e010eace066e8eba72cff9f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719330982140261149,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-84zkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f6426f8-a0c4-470c-b2b1-b62fa304c078,},Annotations:map[string]string{io.kubernetes.container.hash: f8bf5066,io.kub
ernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dff3834f63a382816899f273fe9970c90e171b84aa75c6626dd2435c35f00d8,PodSandboxId:36a6cd372769cb4e0b61267af34ab214f7e98a894596572c1f18f91b85865fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719330982105192113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7
db6d8ff4d-28db5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1426e4a3-2f25-47e9-9b28-b23a81a3a19a,},Annotations:map[string]string{io.kubernetes.container.hash: 370c32f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea2f95fa7a7ef2b8d281a5e1b9c59c317c03084458fe57df036d763e43180c,PodSandboxId:41bb01e505abeae0d97e1019e5c33c9523130dd829e516e2ded6ffc9072c534b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b
34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1719330979753078901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh9n5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a24539-3168-42cc-93b3-d0b1e283d0bd,},Annotations:map[string]string{io.kubernetes.container.hash: 676d07db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ed432b8fb613e5494924e83c46de1bfe881b19bdb28b693da5298cf99f2e65,PodSandboxId:3498fabc6b53a97d349e73fb2ef8cb3df14eef29ff198836b4363612da9f0c9e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a4
7425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1719330960414738558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b7475c05d199411d1b89c7e4e58c52,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e903f61a215f1423f9e270e19b11a7357fee578dc62e4dd059dbe9c47a999c32,PodSandboxId:4695ac9edbc507bbbbe372a26cedd099c7de9206dd507a961697b309c7144f1e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER
_EXITED,CreatedAt:1719330960349486669,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c950a69aa02d41e644621ad51e81615e,},Annotations:map[string]string{io.kubernetes.container.hash: ca1dc9e0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ee886683-e8e4-4b2d-918c-d3ce07da2899 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:11:44 ha-674765 crio[3936]: time="2024-06-25 16:11:44.932764594Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9b2c5645-0721-4701-bae8-d513c209301b name=/runtime.v1.RuntimeService/Version
	Jun 25 16:11:44 ha-674765 crio[3936]: time="2024-06-25 16:11:44.932909444Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9b2c5645-0721-4701-bae8-d513c209301b name=/runtime.v1.RuntimeService/Version
	Jun 25 16:11:44 ha-674765 crio[3936]: time="2024-06-25 16:11:44.934647457Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d0b22537-0c6e-4181-8362-666f7e0c6046 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:11:44 ha-674765 crio[3936]: time="2024-06-25 16:11:44.935392293Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719331904935371448,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0b22537-0c6e-4181-8362-666f7e0c6046 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:11:44 ha-674765 crio[3936]: time="2024-06-25 16:11:44.935855386Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=04a291ec-f628-4a1e-897e-319f5e6ad545 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:11:44 ha-674765 crio[3936]: time="2024-06-25 16:11:44.936002443Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=04a291ec-f628-4a1e-897e-319f5e6ad545 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:11:44 ha-674765 crio[3936]: time="2024-06-25 16:11:44.936392430Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e940f7c608e809152329248d962b7489987764dda2f7576dc8ae2d5448a126b,PodSandboxId:c65aa084da236ba3d7ea0e7917b41dbcdeb30f405ebd8c15df8171e1500f95f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1719331792608349997,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c227c5cf-2bd6-4ebf-9fdb-09d4229cf421,},Annotations:map[string]string{io.kubernetes.container.hash: 69a09345,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c3426ec1f55cfcf657863a2bb9d1d1ed319358c204b3013dc6ea1040ef44ede,PodSandboxId:c1c292c76f6457f213fc624fc352e1d70746b4876bd376bbdfe5c523e7ae157c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1719331679606027881,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ntq77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37736a9f-5b4c-421c-9027-81e961ab8550,},Annotations:map[string]string{io.kubernetes.container.hash: d770c3b8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62eb4fc54721a6cc41ca7c6a7e298bebe70e0bd709ee162f8002bcd99b09f69,PodSandboxId:118263dc5302bc116abde0d88cbfa447be7e9d2d76ee9dfe5e54d3287225cdaa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719331650600404866,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b6a570fe6dbd4fb43c7d5ee0090fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91160cb26919875a6826440f019fdc7a29c4cb1cca9f728b8634425c5d0d0055,PodSandboxId:a74cac97870233d5be7e99d859054dbe59bb62b62dcebb9b93d53d7d97e6ff21,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1719331647935856429,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qjw4r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49031b4f-d04c-44bf-9725-094e7df6945c,},Annotations:map[string]string{io.kubernetes.container.hash: 37a65fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5616a5da2347e48b25e5207a5d383a1b8395ebbdf7444bc962fbab867ccdb3e,PodSandboxId:95eb77c224efb384a2d8a7be87f4b501d838237501dd4657d3dfe9e941a351cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719331645702587343,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551c5ef04195f1917356d613c1c2825c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b5bb08a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81c1834b73fff1174eded512f55252efedba82c00af1234c13e73a27f339b56c,PodSandboxId:4d0b2a1c727ceef717a7d10522e55f9b21763d10b548f6f8ca153b04c08a6ac9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1719331628128137056,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a01324348bb22c8ce03c490b59b42a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:993db242e335019216c340e823497dc2a88a83153badf0eecd3d96c454418fa2,PodSandboxId:2a19a6bbe06bb54deaff5da581b9b45fcd9494227983dd022d0426bb0ab3ccd9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719331614792099718,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh9n5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a24539-3168-42cc-93b3-d0b1e283d0bd,},Annotations:map[string]string{io.kubernetes.container.hash: 676d07db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:5aa65b67e926f58e42af575f038a6429658821d33a89c7d8113e504ad3e6d174,PodSandboxId:c65aa084da236ba3d7ea0e7917b41dbcdeb30f405ebd8c15df8171e1500f95f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1719331614912204262,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c227c5cf-2bd6-4ebf-9fdb-09d4229cf421,},Annotations:map[string]string{io.kubernetes.container.hash: 69a09345,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:967eda17c
ca156722128d0041068e08832d6bb3264caea0bc5fd19be28bf6525,PodSandboxId:cb4f06a80c952a0b7022ea2ce0a18462d11c45e82ab40aa1a165b789f8a376ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719331614718978303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-84zkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f6426f8-a0c4-470c-b2b1-b62fa304c078,},Annotations:map[string]string{io.kubernetes.container.hash: f8bf5066,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca99b81f8f3af37b3a11b2b7acc63b243a795b431477e675b66e1ee8e98320f2,PodSandboxId:b24964e6136b211047c159c805ba1b9d39cffa512722bad477a943783aa84d2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719331614646463810,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28db5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1426e4a3-2f25-47e9-9b28-b23a81a3a19a,},Annotations:map[string]string{io.kubernetes.container.hash: 370c32f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccd04da046aa1783311d35bbb47f916525d9abb5c660cc42d5a6bdebb5c66006,PodSandboxId:bc7e32fc0f4498733a23c791acb9f215a768d0753f5b4a856a40a6f127b6e5fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719331614552451117,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
950a69aa02d41e644621ad51e81615e,},Annotations:map[string]string{io.kubernetes.container.hash: ca1dc9e0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:302139a799fb523035e8a52ecadeecbc2fbc59026ad9ff69cbc5264b7192ee4d,PodSandboxId:118263dc5302bc116abde0d88cbfa447be7e9d2d76ee9dfe5e54d3287225cdaa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1719331614417774935,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 20b6a570fe6dbd4fb43c7d5ee0090fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9d8af94261757538ed64ce37812b4b63ab671c65b53859c558794dffd24a708,PodSandboxId:95eb77c224efb384a2d8a7be87f4b501d838237501dd4657d3dfe9e941a351cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1719331614473721171,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551c5ef04195f19173
56d613c1c2825c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b5bb08a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e9ec15e9ff71c88b7a6fa2117facf91756b7926806742ce61de5689d4eb2a9a,PodSandboxId:22f98f821bc50e6c29db3ed17ae565a5aaa5284f725895f2745c392ce3d8c318,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719331614384125800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b7475c05d199411d1b89c7e4e58c52,},Anno
tations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:527ce3a2a7ca580efeb561b1d051220b75eabca666282f31d4b998998c5ae267,PodSandboxId:c1c292c76f6457f213fc624fc352e1d70746b4876bd376bbdfe5c523e7ae157c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1719331610134727020,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ntq77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37736a9f-5b4c-421c-9027-81e961ab8550,},Annotations:map[string]string{io.kuberne
tes.container.hash: d770c3b8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd7837c56cda33edd21808fe9d0441fdd08abd1bdebe8f801a3611412c9f4915,PodSandboxId:d18f421cdb437abaad95182a5581045ed7639dbd944aa4d3b7cbcf8551a67f1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1719331123602540696,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qjw4r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49031b4f-d04c-44bf-9725-094e7df6945c,},Annotations:map[string]string{io.kubernete
s.container.hash: 37a65fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec00b1016861e1acd703a40af2b274a6d7bd1b0b9c8e37a463cd46994e27ce0b,PodSandboxId:2249d5de30294a4411052d912ac663f8b0d2f1f1e010eace066e8eba72cff9f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719330982140261149,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-84zkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f6426f8-a0c4-470c-b2b1-b62fa304c078,},Annotations:map[string]string{io.kubernetes.container.hash: f8bf5066,io.kub
ernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dff3834f63a382816899f273fe9970c90e171b84aa75c6626dd2435c35f00d8,PodSandboxId:36a6cd372769cb4e0b61267af34ab214f7e98a894596572c1f18f91b85865fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719330982105192113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7
db6d8ff4d-28db5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1426e4a3-2f25-47e9-9b28-b23a81a3a19a,},Annotations:map[string]string{io.kubernetes.container.hash: 370c32f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea2f95fa7a7ef2b8d281a5e1b9c59c317c03084458fe57df036d763e43180c,PodSandboxId:41bb01e505abeae0d97e1019e5c33c9523130dd829e516e2ded6ffc9072c534b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b
34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1719330979753078901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh9n5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a24539-3168-42cc-93b3-d0b1e283d0bd,},Annotations:map[string]string{io.kubernetes.container.hash: 676d07db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ed432b8fb613e5494924e83c46de1bfe881b19bdb28b693da5298cf99f2e65,PodSandboxId:3498fabc6b53a97d349e73fb2ef8cb3df14eef29ff198836b4363612da9f0c9e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a4
7425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1719330960414738558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b7475c05d199411d1b89c7e4e58c52,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e903f61a215f1423f9e270e19b11a7357fee578dc62e4dd059dbe9c47a999c32,PodSandboxId:4695ac9edbc507bbbbe372a26cedd099c7de9206dd507a961697b309c7144f1e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER
_EXITED,CreatedAt:1719330960349486669,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c950a69aa02d41e644621ad51e81615e,},Annotations:map[string]string{io.kubernetes.container.hash: ca1dc9e0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=04a291ec-f628-4a1e-897e-319f5e6ad545 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:11:44 ha-674765 crio[3936]: time="2024-06-25 16:11:44.989064097Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6b588d8f-9883-44d7-b57a-226521ca7ec3 name=/runtime.v1.RuntimeService/Version
	Jun 25 16:11:44 ha-674765 crio[3936]: time="2024-06-25 16:11:44.989139032Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6b588d8f-9883-44d7-b57a-226521ca7ec3 name=/runtime.v1.RuntimeService/Version
	Jun 25 16:11:44 ha-674765 crio[3936]: time="2024-06-25 16:11:44.990428442Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e0c2b007-09ca-40f4-877f-496b136a8038 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:11:44 ha-674765 crio[3936]: time="2024-06-25 16:11:44.990833276Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719331904990810290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e0c2b007-09ca-40f4-877f-496b136a8038 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:11:44 ha-674765 crio[3936]: time="2024-06-25 16:11:44.991691620Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=08bbd9f2-bbe2-4658-bee8-f710dc1fc622 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:11:44 ha-674765 crio[3936]: time="2024-06-25 16:11:44.991765985Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=08bbd9f2-bbe2-4658-bee8-f710dc1fc622 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:11:44 ha-674765 crio[3936]: time="2024-06-25 16:11:44.992313817Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e940f7c608e809152329248d962b7489987764dda2f7576dc8ae2d5448a126b,PodSandboxId:c65aa084da236ba3d7ea0e7917b41dbcdeb30f405ebd8c15df8171e1500f95f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1719331792608349997,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c227c5cf-2bd6-4ebf-9fdb-09d4229cf421,},Annotations:map[string]string{io.kubernetes.container.hash: 69a09345,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c3426ec1f55cfcf657863a2bb9d1d1ed319358c204b3013dc6ea1040ef44ede,PodSandboxId:c1c292c76f6457f213fc624fc352e1d70746b4876bd376bbdfe5c523e7ae157c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1719331679606027881,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ntq77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37736a9f-5b4c-421c-9027-81e961ab8550,},Annotations:map[string]string{io.kubernetes.container.hash: d770c3b8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62eb4fc54721a6cc41ca7c6a7e298bebe70e0bd709ee162f8002bcd99b09f69,PodSandboxId:118263dc5302bc116abde0d88cbfa447be7e9d2d76ee9dfe5e54d3287225cdaa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719331650600404866,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b6a570fe6dbd4fb43c7d5ee0090fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91160cb26919875a6826440f019fdc7a29c4cb1cca9f728b8634425c5d0d0055,PodSandboxId:a74cac97870233d5be7e99d859054dbe59bb62b62dcebb9b93d53d7d97e6ff21,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1719331647935856429,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qjw4r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49031b4f-d04c-44bf-9725-094e7df6945c,},Annotations:map[string]string{io.kubernetes.container.hash: 37a65fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5616a5da2347e48b25e5207a5d383a1b8395ebbdf7444bc962fbab867ccdb3e,PodSandboxId:95eb77c224efb384a2d8a7be87f4b501d838237501dd4657d3dfe9e941a351cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719331645702587343,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551c5ef04195f1917356d613c1c2825c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b5bb08a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81c1834b73fff1174eded512f55252efedba82c00af1234c13e73a27f339b56c,PodSandboxId:4d0b2a1c727ceef717a7d10522e55f9b21763d10b548f6f8ca153b04c08a6ac9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1719331628128137056,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a01324348bb22c8ce03c490b59b42a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:993db242e335019216c340e823497dc2a88a83153badf0eecd3d96c454418fa2,PodSandboxId:2a19a6bbe06bb54deaff5da581b9b45fcd9494227983dd022d0426bb0ab3ccd9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719331614792099718,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh9n5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a24539-3168-42cc-93b3-d0b1e283d0bd,},Annotations:map[string]string{io.kubernetes.container.hash: 676d07db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:5aa65b67e926f58e42af575f038a6429658821d33a89c7d8113e504ad3e6d174,PodSandboxId:c65aa084da236ba3d7ea0e7917b41dbcdeb30f405ebd8c15df8171e1500f95f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1719331614912204262,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c227c5cf-2bd6-4ebf-9fdb-09d4229cf421,},Annotations:map[string]string{io.kubernetes.container.hash: 69a09345,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:967eda17c
ca156722128d0041068e08832d6bb3264caea0bc5fd19be28bf6525,PodSandboxId:cb4f06a80c952a0b7022ea2ce0a18462d11c45e82ab40aa1a165b789f8a376ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719331614718978303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-84zkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f6426f8-a0c4-470c-b2b1-b62fa304c078,},Annotations:map[string]string{io.kubernetes.container.hash: f8bf5066,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca99b81f8f3af37b3a11b2b7acc63b243a795b431477e675b66e1ee8e98320f2,PodSandboxId:b24964e6136b211047c159c805ba1b9d39cffa512722bad477a943783aa84d2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719331614646463810,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28db5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1426e4a3-2f25-47e9-9b28-b23a81a3a19a,},Annotations:map[string]string{io.kubernetes.container.hash: 370c32f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccd04da046aa1783311d35bbb47f916525d9abb5c660cc42d5a6bdebb5c66006,PodSandboxId:bc7e32fc0f4498733a23c791acb9f215a768d0753f5b4a856a40a6f127b6e5fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719331614552451117,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
950a69aa02d41e644621ad51e81615e,},Annotations:map[string]string{io.kubernetes.container.hash: ca1dc9e0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:302139a799fb523035e8a52ecadeecbc2fbc59026ad9ff69cbc5264b7192ee4d,PodSandboxId:118263dc5302bc116abde0d88cbfa447be7e9d2d76ee9dfe5e54d3287225cdaa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1719331614417774935,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 20b6a570fe6dbd4fb43c7d5ee0090fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9d8af94261757538ed64ce37812b4b63ab671c65b53859c558794dffd24a708,PodSandboxId:95eb77c224efb384a2d8a7be87f4b501d838237501dd4657d3dfe9e941a351cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1719331614473721171,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551c5ef04195f19173
56d613c1c2825c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b5bb08a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e9ec15e9ff71c88b7a6fa2117facf91756b7926806742ce61de5689d4eb2a9a,PodSandboxId:22f98f821bc50e6c29db3ed17ae565a5aaa5284f725895f2745c392ce3d8c318,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719331614384125800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b7475c05d199411d1b89c7e4e58c52,},Anno
tations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:527ce3a2a7ca580efeb561b1d051220b75eabca666282f31d4b998998c5ae267,PodSandboxId:c1c292c76f6457f213fc624fc352e1d70746b4876bd376bbdfe5c523e7ae157c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1719331610134727020,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ntq77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37736a9f-5b4c-421c-9027-81e961ab8550,},Annotations:map[string]string{io.kuberne
tes.container.hash: d770c3b8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd7837c56cda33edd21808fe9d0441fdd08abd1bdebe8f801a3611412c9f4915,PodSandboxId:d18f421cdb437abaad95182a5581045ed7639dbd944aa4d3b7cbcf8551a67f1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1719331123602540696,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qjw4r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49031b4f-d04c-44bf-9725-094e7df6945c,},Annotations:map[string]string{io.kubernete
s.container.hash: 37a65fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec00b1016861e1acd703a40af2b274a6d7bd1b0b9c8e37a463cd46994e27ce0b,PodSandboxId:2249d5de30294a4411052d912ac663f8b0d2f1f1e010eace066e8eba72cff9f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719330982140261149,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-84zkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f6426f8-a0c4-470c-b2b1-b62fa304c078,},Annotations:map[string]string{io.kubernetes.container.hash: f8bf5066,io.kub
ernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dff3834f63a382816899f273fe9970c90e171b84aa75c6626dd2435c35f00d8,PodSandboxId:36a6cd372769cb4e0b61267af34ab214f7e98a894596572c1f18f91b85865fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719330982105192113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7
db6d8ff4d-28db5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1426e4a3-2f25-47e9-9b28-b23a81a3a19a,},Annotations:map[string]string{io.kubernetes.container.hash: 370c32f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea2f95fa7a7ef2b8d281a5e1b9c59c317c03084458fe57df036d763e43180c,PodSandboxId:41bb01e505abeae0d97e1019e5c33c9523130dd829e516e2ded6ffc9072c534b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b
34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1719330979753078901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh9n5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a24539-3168-42cc-93b3-d0b1e283d0bd,},Annotations:map[string]string{io.kubernetes.container.hash: 676d07db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ed432b8fb613e5494924e83c46de1bfe881b19bdb28b693da5298cf99f2e65,PodSandboxId:3498fabc6b53a97d349e73fb2ef8cb3df14eef29ff198836b4363612da9f0c9e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a4
7425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1719330960414738558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b7475c05d199411d1b89c7e4e58c52,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e903f61a215f1423f9e270e19b11a7357fee578dc62e4dd059dbe9c47a999c32,PodSandboxId:4695ac9edbc507bbbbe372a26cedd099c7de9206dd507a961697b309c7144f1e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER
_EXITED,CreatedAt:1719330960349486669,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c950a69aa02d41e644621ad51e81615e,},Annotations:map[string]string{io.kubernetes.container.hash: ca1dc9e0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=08bbd9f2-bbe2-4658-bee8-f710dc1fc622 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:11:45 ha-674765 crio[3936]: time="2024-06-25 16:11:45.038159791Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=59d73801-4859-4b91-9304-fc1509abd2fa name=/runtime.v1.RuntimeService/Version
	Jun 25 16:11:45 ha-674765 crio[3936]: time="2024-06-25 16:11:45.038289784Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=59d73801-4859-4b91-9304-fc1509abd2fa name=/runtime.v1.RuntimeService/Version
	Jun 25 16:11:45 ha-674765 crio[3936]: time="2024-06-25 16:11:45.040199069Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=06ca1b1e-a74a-460c-9c86-d7fb81631ec5 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:11:45 ha-674765 crio[3936]: time="2024-06-25 16:11:45.040981127Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719331905040955272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=06ca1b1e-a74a-460c-9c86-d7fb81631ec5 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:11:45 ha-674765 crio[3936]: time="2024-06-25 16:11:45.041743862Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0bd2f24d-9bdc-48c2-9fa1-21b12125bb3e name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:11:45 ha-674765 crio[3936]: time="2024-06-25 16:11:45.041830298Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0bd2f24d-9bdc-48c2-9fa1-21b12125bb3e name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:11:45 ha-674765 crio[3936]: time="2024-06-25 16:11:45.042321680Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e940f7c608e809152329248d962b7489987764dda2f7576dc8ae2d5448a126b,PodSandboxId:c65aa084da236ba3d7ea0e7917b41dbcdeb30f405ebd8c15df8171e1500f95f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1719331792608349997,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c227c5cf-2bd6-4ebf-9fdb-09d4229cf421,},Annotations:map[string]string{io.kubernetes.container.hash: 69a09345,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c3426ec1f55cfcf657863a2bb9d1d1ed319358c204b3013dc6ea1040ef44ede,PodSandboxId:c1c292c76f6457f213fc624fc352e1d70746b4876bd376bbdfe5c523e7ae157c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1719331679606027881,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ntq77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37736a9f-5b4c-421c-9027-81e961ab8550,},Annotations:map[string]string{io.kubernetes.container.hash: d770c3b8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62eb4fc54721a6cc41ca7c6a7e298bebe70e0bd709ee162f8002bcd99b09f69,PodSandboxId:118263dc5302bc116abde0d88cbfa447be7e9d2d76ee9dfe5e54d3287225cdaa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719331650600404866,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b6a570fe6dbd4fb43c7d5ee0090fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91160cb26919875a6826440f019fdc7a29c4cb1cca9f728b8634425c5d0d0055,PodSandboxId:a74cac97870233d5be7e99d859054dbe59bb62b62dcebb9b93d53d7d97e6ff21,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1719331647935856429,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qjw4r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49031b4f-d04c-44bf-9725-094e7df6945c,},Annotations:map[string]string{io.kubernetes.container.hash: 37a65fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5616a5da2347e48b25e5207a5d383a1b8395ebbdf7444bc962fbab867ccdb3e,PodSandboxId:95eb77c224efb384a2d8a7be87f4b501d838237501dd4657d3dfe9e941a351cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719331645702587343,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551c5ef04195f1917356d613c1c2825c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b5bb08a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81c1834b73fff1174eded512f55252efedba82c00af1234c13e73a27f339b56c,PodSandboxId:4d0b2a1c727ceef717a7d10522e55f9b21763d10b548f6f8ca153b04c08a6ac9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1719331628128137056,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a01324348bb22c8ce03c490b59b42a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:993db242e335019216c340e823497dc2a88a83153badf0eecd3d96c454418fa2,PodSandboxId:2a19a6bbe06bb54deaff5da581b9b45fcd9494227983dd022d0426bb0ab3ccd9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719331614792099718,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh9n5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a24539-3168-42cc-93b3-d0b1e283d0bd,},Annotations:map[string]string{io.kubernetes.container.hash: 676d07db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:5aa65b67e926f58e42af575f038a6429658821d33a89c7d8113e504ad3e6d174,PodSandboxId:c65aa084da236ba3d7ea0e7917b41dbcdeb30f405ebd8c15df8171e1500f95f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1719331614912204262,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c227c5cf-2bd6-4ebf-9fdb-09d4229cf421,},Annotations:map[string]string{io.kubernetes.container.hash: 69a09345,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:967eda17c
ca156722128d0041068e08832d6bb3264caea0bc5fd19be28bf6525,PodSandboxId:cb4f06a80c952a0b7022ea2ce0a18462d11c45e82ab40aa1a165b789f8a376ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719331614718978303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-84zkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f6426f8-a0c4-470c-b2b1-b62fa304c078,},Annotations:map[string]string{io.kubernetes.container.hash: f8bf5066,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca99b81f8f3af37b3a11b2b7acc63b243a795b431477e675b66e1ee8e98320f2,PodSandboxId:b24964e6136b211047c159c805ba1b9d39cffa512722bad477a943783aa84d2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719331614646463810,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28db5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1426e4a3-2f25-47e9-9b28-b23a81a3a19a,},Annotations:map[string]string{io.kubernetes.container.hash: 370c32f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccd04da046aa1783311d35bbb47f916525d9abb5c660cc42d5a6bdebb5c66006,PodSandboxId:bc7e32fc0f4498733a23c791acb9f215a768d0753f5b4a856a40a6f127b6e5fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719331614552451117,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
950a69aa02d41e644621ad51e81615e,},Annotations:map[string]string{io.kubernetes.container.hash: ca1dc9e0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:302139a799fb523035e8a52ecadeecbc2fbc59026ad9ff69cbc5264b7192ee4d,PodSandboxId:118263dc5302bc116abde0d88cbfa447be7e9d2d76ee9dfe5e54d3287225cdaa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1719331614417774935,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 20b6a570fe6dbd4fb43c7d5ee0090fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9d8af94261757538ed64ce37812b4b63ab671c65b53859c558794dffd24a708,PodSandboxId:95eb77c224efb384a2d8a7be87f4b501d838237501dd4657d3dfe9e941a351cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1719331614473721171,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551c5ef04195f19173
56d613c1c2825c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b5bb08a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e9ec15e9ff71c88b7a6fa2117facf91756b7926806742ce61de5689d4eb2a9a,PodSandboxId:22f98f821bc50e6c29db3ed17ae565a5aaa5284f725895f2745c392ce3d8c318,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719331614384125800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b7475c05d199411d1b89c7e4e58c52,},Anno
tations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:527ce3a2a7ca580efeb561b1d051220b75eabca666282f31d4b998998c5ae267,PodSandboxId:c1c292c76f6457f213fc624fc352e1d70746b4876bd376bbdfe5c523e7ae157c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1719331610134727020,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ntq77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37736a9f-5b4c-421c-9027-81e961ab8550,},Annotations:map[string]string{io.kuberne
tes.container.hash: d770c3b8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd7837c56cda33edd21808fe9d0441fdd08abd1bdebe8f801a3611412c9f4915,PodSandboxId:d18f421cdb437abaad95182a5581045ed7639dbd944aa4d3b7cbcf8551a67f1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1719331123602540696,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qjw4r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49031b4f-d04c-44bf-9725-094e7df6945c,},Annotations:map[string]string{io.kubernete
s.container.hash: 37a65fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec00b1016861e1acd703a40af2b274a6d7bd1b0b9c8e37a463cd46994e27ce0b,PodSandboxId:2249d5de30294a4411052d912ac663f8b0d2f1f1e010eace066e8eba72cff9f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719330982140261149,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-84zkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f6426f8-a0c4-470c-b2b1-b62fa304c078,},Annotations:map[string]string{io.kubernetes.container.hash: f8bf5066,io.kub
ernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dff3834f63a382816899f273fe9970c90e171b84aa75c6626dd2435c35f00d8,PodSandboxId:36a6cd372769cb4e0b61267af34ab214f7e98a894596572c1f18f91b85865fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719330982105192113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7
db6d8ff4d-28db5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1426e4a3-2f25-47e9-9b28-b23a81a3a19a,},Annotations:map[string]string{io.kubernetes.container.hash: 370c32f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea2f95fa7a7ef2b8d281a5e1b9c59c317c03084458fe57df036d763e43180c,PodSandboxId:41bb01e505abeae0d97e1019e5c33c9523130dd829e516e2ded6ffc9072c534b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b
34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1719330979753078901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh9n5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a24539-3168-42cc-93b3-d0b1e283d0bd,},Annotations:map[string]string{io.kubernetes.container.hash: 676d07db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ed432b8fb613e5494924e83c46de1bfe881b19bdb28b693da5298cf99f2e65,PodSandboxId:3498fabc6b53a97d349e73fb2ef8cb3df14eef29ff198836b4363612da9f0c9e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a4
7425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1719330960414738558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b7475c05d199411d1b89c7e4e58c52,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e903f61a215f1423f9e270e19b11a7357fee578dc62e4dd059dbe9c47a999c32,PodSandboxId:4695ac9edbc507bbbbe372a26cedd099c7de9206dd507a961697b309c7144f1e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER
_EXITED,CreatedAt:1719330960349486669,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-674765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c950a69aa02d41e644621ad51e81615e,},Annotations:map[string]string{io.kubernetes.container.hash: ca1dc9e0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0bd2f24d-9bdc-48c2-9fa1-21b12125bb3e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5e940f7c608e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       6                   c65aa084da236       storage-provisioner
	0c3426ec1f55c       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      3 minutes ago        Running             kindnet-cni               3                   c1c292c76f645       kindnet-ntq77
	b62eb4fc54721       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      4 minutes ago        Running             kube-controller-manager   2                   118263dc5302b       kube-controller-manager-ha-674765
	91160cb269198       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago        Running             busybox                   1                   a74cac9787023       busybox-fc5497c4f-qjw4r
	f5616a5da2347       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      4 minutes ago        Running             kube-apiserver            3                   95eb77c224efb       kube-apiserver-ha-674765
	81c1834b73fff       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago        Running             kube-vip                  0                   4d0b2a1c727ce       kube-vip-ha-674765
	5aa65b67e926f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago        Exited              storage-provisioner       5                   c65aa084da236       storage-provisioner
	993db242e3350       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      4 minutes ago        Running             kube-proxy                1                   2a19a6bbe06bb       kube-proxy-rh9n5
	967eda17cca15       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago        Running             coredns                   1                   cb4f06a80c952       coredns-7db6d8ff4d-84zkt
	ca99b81f8f3af       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago        Running             coredns                   1                   b24964e6136b2       coredns-7db6d8ff4d-28db5
	ccd04da046aa1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago        Running             etcd                      1                   bc7e32fc0f449       etcd-ha-674765
	d9d8af9426175       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      4 minutes ago        Exited              kube-apiserver            2                   95eb77c224efb       kube-apiserver-ha-674765
	302139a799fb5       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      4 minutes ago        Exited              kube-controller-manager   1                   118263dc5302b       kube-controller-manager-ha-674765
	3e9ec15e9ff71       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      4 minutes ago        Running             kube-scheduler            1                   22f98f821bc50       kube-scheduler-ha-674765
	527ce3a2a7ca5       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      4 minutes ago        Exited              kindnet-cni               2                   c1c292c76f645       kindnet-ntq77
	dd7837c56cda3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago       Exited              busybox                   0                   d18f421cdb437       busybox-fc5497c4f-qjw4r
	ec00b1016861e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago       Exited              coredns                   0                   2249d5de30294       coredns-7db6d8ff4d-84zkt
	5dff3834f63a3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago       Exited              coredns                   0                   36a6cd372769c       coredns-7db6d8ff4d-28db5
	7cea2f95fa7a7       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      15 minutes ago       Exited              kube-proxy                0                   41bb01e505abe       kube-proxy-rh9n5
	a7ed432b8fb61       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      15 minutes ago       Exited              kube-scheduler            0                   3498fabc6b53a       kube-scheduler-ha-674765
	e903f61a215f1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      15 minutes ago       Exited              etcd                      0                   4695ac9edbc50       etcd-ha-674765
	
	
	==> coredns [5dff3834f63a382816899f273fe9970c90e171b84aa75c6626dd2435c35f00d8] <==
	[INFO] 10.244.0.4:40292 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069393s
	[INFO] 10.244.0.4:47923 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008723s
	[INFO] 10.244.2.2:43607 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000173082s
	[INFO] 10.244.2.2:58140 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152475s
	[INFO] 10.244.2.2:58321 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00137128s
	[INFO] 10.244.2.2:51827 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149446s
	[INFO] 10.244.1.2:53516 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091184s
	[INFO] 10.244.1.2:50837 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111518s
	[INFO] 10.244.0.4:36638 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096918s
	[INFO] 10.244.0.4:34420 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062938s
	[INFO] 10.244.2.2:47727 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109009s
	[INFO] 10.244.2.2:53547 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114146s
	[INFO] 10.244.2.2:52427 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103325s
	[INFO] 10.244.0.4:35396 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015274s
	[INFO] 10.244.0.4:37070 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000162346s
	[INFO] 10.244.0.4:34499 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000181932s
	[INFO] 10.244.2.2:39406 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141568s
	[INFO] 10.244.2.2:45012 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000125003s
	[INFO] 10.244.2.2:37480 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111741s
	[INFO] 10.244.2.2:38163 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000160497s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [967eda17cca156722128d0041068e08832d6bb3264caea0bc5fd19be28bf6525] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1035445682]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jun-2024 16:07:02.458) (total time: 10001ms):
	Trace[1035445682]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:07:12.460)
	Trace[1035445682]: [10.001587684s] [10.001587684s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:34822->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[131637814]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jun-2024 16:07:06.222) (total time: 12359ms):
	Trace[131637814]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:34822->10.96.0.1:443: read: connection reset by peer 12359ms (16:07:18.581)
	Trace[131637814]: [12.359266363s] [12.359266363s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:34822->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:40452->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:40452->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
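
	The failures above all target the in-cluster apiserver service address 10.96.0.1:443, first as TLS handshake timeouts and then as "connection refused" / "no route to host" while the control plane on ha-674765 restarts. As a rough illustration only (not part of the test suite), a minimal Go probe of that same address, assuming it runs from inside the cluster network, would see the same sequence of errors:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the kubernetes service VIP that the coredns reflector retries above.
		// While the apiserver is down this fails with "connection refused"; while
		// the service iptables rules are gone it fails with "no route to host".
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("apiserver service reachable via", conn.RemoteAddr())
	}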
	
	
	==> coredns [ca99b81f8f3af37b3a11b2b7acc63b243a795b431477e675b66e1ee8e98320f2] <==
	Trace[2134462864]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (16:07:08.615)
	Trace[2134462864]: [10.00102088s] [10.00102088s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[950250650]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jun-2024 16:06:58.656) (total time: 10000ms):
	Trace[950250650]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (16:07:08.656)
	Trace[950250650]: [10.000769831s] [10.000769831s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:46754->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[469272890]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jun-2024 16:07:06.362) (total time: 12219ms):
	Trace[469272890]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:46754->10.96.0.1:443: read: connection reset by peer 12218ms (16:07:18.581)
	Trace[469272890]: [12.219041212s] [12.219041212s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:46754->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [ec00b1016861e1acd703a40af2b274a6d7bd1b0b9c8e37a463cd46994e27ce0b] <==
	[INFO] 10.244.1.2:57875 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000221098s
	[INFO] 10.244.1.2:50144 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003188543s
	[INFO] 10.244.1.2:52779 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000142142s
	[INFO] 10.244.0.4:54632 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118741s
	[INFO] 10.244.0.4:42979 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001269082s
	[INFO] 10.244.0.4:36713 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000084451s
	[INFO] 10.244.2.2:41583 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001597985s
	[INFO] 10.244.2.2:38518 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007901s
	[INFO] 10.244.2.2:36859 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000163343s
	[INFO] 10.244.2.2:48049 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012051s
	[INFO] 10.244.1.2:41596 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099989s
	[INFO] 10.244.1.2:53657 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152026s
	[INFO] 10.244.0.4:37328 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010546s
	[INFO] 10.244.0.4:37107 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000078111s
	[INFO] 10.244.2.2:58260 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109644s
	[INFO] 10.244.1.2:51838 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161138s
	[INFO] 10.244.1.2:34544 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000245952s
	[INFO] 10.244.1.2:41848 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000133045s
	[INFO] 10.244.1.2:55838 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000180767s
	[INFO] 10.244.0.4:56384 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000068132s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-674765
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-674765
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b
	                    minikube.k8s.io/name=ha-674765
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_25T15_56_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 25 Jun 2024 15:56:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-674765
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 25 Jun 2024 16:11:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 25 Jun 2024 16:10:25 +0000   Tue, 25 Jun 2024 16:10:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 25 Jun 2024 16:10:25 +0000   Tue, 25 Jun 2024 16:10:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 25 Jun 2024 16:10:25 +0000   Tue, 25 Jun 2024 16:10:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 25 Jun 2024 16:10:25 +0000   Tue, 25 Jun 2024 16:10:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.128
	  Hostname:    ha-674765
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b9f74a4b042742c8a0ef29e697c6459c
	  System UUID:                b9f74a4b-0427-42c8-a0ef-29e697c6459c
	  Boot ID:                    52ea2189-696e-4985-bf6b-90448e3e85aa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qjw4r              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-28db5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-84zkt             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-674765                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-ntq77                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-674765             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-674765    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-rh9n5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-674765             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-674765                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m19s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4m7s               kube-proxy       
	  Normal   Starting                 15m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node ha-674765 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node ha-674765 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node ha-674765 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           15m                node-controller  Node ha-674765 event: Registered Node ha-674765 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-674765 event: Registered Node ha-674765 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-674765 event: Registered Node ha-674765 in Controller
	  Warning  ContainerGCFailed        5m39s              kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m4s               node-controller  Node ha-674765 event: Registered Node ha-674765 in Controller
	  Normal   RegisteredNode           4m3s               node-controller  Node ha-674765 event: Registered Node ha-674765 in Controller
	  Normal   RegisteredNode           3m11s              node-controller  Node ha-674765 event: Registered Node ha-674765 in Controller
	  Normal   NodeNotReady             104s               node-controller  Node ha-674765 status is now: NodeNotReady
	  Normal   NodeHasSufficientPID     80s (x2 over 15m)  kubelet          Node ha-674765 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    80s (x2 over 15m)  kubelet          Node ha-674765 status is now: NodeHasNoDiskPressure
	  Normal   NodeReady                80s (x2 over 15m)  kubelet          Node ha-674765 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  80s (x2 over 15m)  kubelet          Node ha-674765 status is now: NodeHasSufficientMemory
	
	
	Name:               ha-674765-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-674765-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b
	                    minikube.k8s.io/name=ha-674765
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_25T15_57_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 25 Jun 2024 15:57:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-674765-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 25 Jun 2024 16:11:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 25 Jun 2024 16:08:10 +0000   Tue, 25 Jun 2024 16:07:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 25 Jun 2024 16:08:10 +0000   Tue, 25 Jun 2024 16:07:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 25 Jun 2024 16:08:10 +0000   Tue, 25 Jun 2024 16:07:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 25 Jun 2024 16:08:10 +0000   Tue, 25 Jun 2024 16:07:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.53
	  Hostname:    ha-674765-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 45ee8176fa3149fdb7e4bac2256c26b7
	  System UUID:                45ee8176-fa31-49fd-b7e4-bac2256c26b7
	  Boot ID:                    1e188258-37ae-4518-b693-d29f05e5ab3f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jx6j4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-674765-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-kkgdq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-674765-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-674765-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-lsmft                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-674765-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-674765-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m9s                   kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-674765-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-674765-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-674765-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                    node-controller  Node ha-674765-m02 event: Registered Node ha-674765-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-674765-m02 event: Registered Node ha-674765-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-674765-m02 event: Registered Node ha-674765-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-674765-m02 status is now: NodeNotReady
	  Normal  Starting                 4m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m36s (x8 over 4m36s)  kubelet          Node ha-674765-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m36s (x8 over 4m36s)  kubelet          Node ha-674765-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m36s (x7 over 4m36s)  kubelet          Node ha-674765-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m4s                   node-controller  Node ha-674765-m02 event: Registered Node ha-674765-m02 in Controller
	  Normal  RegisteredNode           4m3s                   node-controller  Node ha-674765-m02 event: Registered Node ha-674765-m02 in Controller
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-674765-m02 event: Registered Node ha-674765-m02 in Controller
	
	
	Name:               ha-674765-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-674765-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b
	                    minikube.k8s.io/name=ha-674765
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_25T15_59_18_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 25 Jun 2024 15:59:17 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-674765-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 25 Jun 2024 16:09:18 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 25 Jun 2024 16:08:58 +0000   Tue, 25 Jun 2024 16:10:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 25 Jun 2024 16:08:58 +0000   Tue, 25 Jun 2024 16:10:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 25 Jun 2024 16:08:58 +0000   Tue, 25 Jun 2024 16:10:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 25 Jun 2024 16:08:58 +0000   Tue, 25 Jun 2024 16:10:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.7
	  Hostname:    ha-674765-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 153487087a1a4805965ecc96230ab164
	  System UUID:                15348708-7a1a-4805-965e-cc96230ab164
	  Boot ID:                    d3c3f5a1-b9d4-4e7b-99f0-e8e93bd038b0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8qcp8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-6z24k              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-szzwh           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-674765-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-674765-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-674765-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-674765-m04 event: Registered Node ha-674765-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-674765-m04 event: Registered Node ha-674765-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-674765-m04 event: Registered Node ha-674765-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-674765-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m4s                   node-controller  Node ha-674765-m04 event: Registered Node ha-674765-m04 in Controller
	  Normal   RegisteredNode           4m3s                   node-controller  Node ha-674765-m04 event: Registered Node ha-674765-m04 in Controller
	  Normal   RegisteredNode           3m11s                  node-controller  Node ha-674765-m04 event: Registered Node ha-674765-m04 in Controller
	  Normal   NodeHasSufficientMemory  2m47s (x2 over 2m47s)  kubelet          Node ha-674765-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m47s (x2 over 2m47s)  kubelet          Node ha-674765-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x2 over 2m47s)  kubelet          Node ha-674765-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m47s                  kubelet          Node ha-674765-m04 has been rebooted, boot id: d3c3f5a1-b9d4-4e7b-99f0-e8e93bd038b0
	  Normal   NodeReady                2m47s                  kubelet          Node ha-674765-m04 status is now: NodeReady
	  Normal   NodeNotReady             104s (x2 over 3m24s)   node-controller  Node ha-674765-m04 status is now: NodeNotReady
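
	The Unknown conditions and unreachable taints on ha-674765-m04 come from the node-controller after the kubelet stopped posting status (last heartbeat 16:08:58, transition 16:10:01). A minimal client-go sketch that reads the same conditions and taints this table is rendered from; the kubeconfig path here is an assumption and would need to point at the test cluster:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; the test harness writes its own path.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-674765-m04", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Ready/MemoryPressure/DiskPressure/PIDPressure flip to Unknown with
		// reason NodeStatusUnknown once heartbeats stop; the unreachable taints
		// are added by the node lifecycle controller at the same time.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-8s %s\n", c.Type, c.Status, c.Reason)
		}
		for _, t := range node.Spec.Taints {
			fmt.Println("taint:", t.Key, t.Effect)
		}
	}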
	
	
	==> dmesg <==
	[ +10.515677] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.054245] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062657] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.163326] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.122319] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.250574] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.069829] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +3.840914] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.060181] kauditd_printk_skb: 158 callbacks suppressed
	[Jun25 15:56] systemd-fstab-generator[1368]: Ignoring "noauto" option for root device
	[  +0.085967] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.321595] kauditd_printk_skb: 21 callbacks suppressed
	[Jun25 15:57] kauditd_printk_skb: 74 callbacks suppressed
	[Jun25 16:03] kauditd_printk_skb: 1 callbacks suppressed
	[Jun25 16:06] systemd-fstab-generator[3857]: Ignoring "noauto" option for root device
	[  +0.148239] systemd-fstab-generator[3869]: Ignoring "noauto" option for root device
	[  +0.171292] systemd-fstab-generator[3883]: Ignoring "noauto" option for root device
	[  +0.147208] systemd-fstab-generator[3895]: Ignoring "noauto" option for root device
	[  +0.265636] systemd-fstab-generator[3923]: Ignoring "noauto" option for root device
	[  +4.883560] systemd-fstab-generator[4024]: Ignoring "noauto" option for root device
	[  +0.083360] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.897496] kauditd_printk_skb: 22 callbacks suppressed
	[Jun25 16:07] kauditd_printk_skb: 87 callbacks suppressed
	[ +10.060166] kauditd_printk_skb: 2 callbacks suppressed
	[ +22.057807] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [ccd04da046aa1783311d35bbb47f916525d9abb5c660cc42d5a6bdebb5c66006] <==
	{"level":"info","ts":"2024-06-25T16:08:17.175673Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fa515506e66f6916","remote-peer-id":"861dd526078d031b"}
	{"level":"info","ts":"2024-06-25T16:08:17.178162Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"fa515506e66f6916","remote-peer-id":"861dd526078d031b"}
	{"level":"info","ts":"2024-06-25T16:08:17.203712Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"fa515506e66f6916","to":"861dd526078d031b","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-06-25T16:08:17.203815Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"fa515506e66f6916","remote-peer-id":"861dd526078d031b"}
	{"level":"info","ts":"2024-06-25T16:08:17.204645Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"fa515506e66f6916","to":"861dd526078d031b","stream-type":"stream Message"}
	{"level":"info","ts":"2024-06-25T16:08:17.20474Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"fa515506e66f6916","remote-peer-id":"861dd526078d031b"}
	{"level":"warn","ts":"2024-06-25T16:08:21.511665Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"ce369a7c509ac3e5","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"48.405328ms"}
	{"level":"warn","ts":"2024-06-25T16:08:21.511933Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"861dd526078d031b","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"48.682513ms"}
	{"level":"info","ts":"2024-06-25T16:09:11.267014Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa515506e66f6916 switched to configuration voters=(14859233879274472421 18037291470719772950)"}
	{"level":"info","ts":"2024-06-25T16:09:11.269173Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"b64da5b92548cbb8","local-member-id":"fa515506e66f6916","removed-remote-peer-id":"861dd526078d031b","removed-remote-peer-urls":["https://192.168.39.77:2380"]}
	{"level":"info","ts":"2024-06-25T16:09:11.269308Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"861dd526078d031b"}
	{"level":"warn","ts":"2024-06-25T16:09:11.269717Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"861dd526078d031b"}
	{"level":"info","ts":"2024-06-25T16:09:11.269802Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"861dd526078d031b"}
	{"level":"warn","ts":"2024-06-25T16:09:11.270214Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"861dd526078d031b"}
	{"level":"info","ts":"2024-06-25T16:09:11.270296Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"861dd526078d031b"}
	{"level":"info","ts":"2024-06-25T16:09:11.270588Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fa515506e66f6916","remote-peer-id":"861dd526078d031b"}
	{"level":"warn","ts":"2024-06-25T16:09:11.270772Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fa515506e66f6916","remote-peer-id":"861dd526078d031b","error":"context canceled"}
	{"level":"warn","ts":"2024-06-25T16:09:11.270944Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"861dd526078d031b","error":"failed to read 861dd526078d031b on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-06-25T16:09:11.271015Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fa515506e66f6916","remote-peer-id":"861dd526078d031b"}
	{"level":"warn","ts":"2024-06-25T16:09:11.271229Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"fa515506e66f6916","remote-peer-id":"861dd526078d031b","error":"http: read on closed response body"}
	{"level":"info","ts":"2024-06-25T16:09:11.271284Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fa515506e66f6916","remote-peer-id":"861dd526078d031b"}
	{"level":"info","ts":"2024-06-25T16:09:11.271324Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"861dd526078d031b"}
	{"level":"info","ts":"2024-06-25T16:09:11.271362Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"fa515506e66f6916","removed-remote-peer-id":"861dd526078d031b"}
	{"level":"warn","ts":"2024-06-25T16:09:11.288242Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"fa515506e66f6916","remote-peer-id-stream-handler":"fa515506e66f6916","remote-peer-id-from":"861dd526078d031b"}
	{"level":"warn","ts":"2024-06-25T16:09:11.295504Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.77:44506","server-name":"","error":"EOF"}
	
	
	==> etcd [e903f61a215f1423f9e270e19b11a7357fee578dc62e4dd059dbe9c47a999c32] <==
	2024/06/25 16:05:10 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/06/25 16:05:10 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/06/25 16:05:10 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/06/25 16:05:10 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/06/25 16:05:10 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-25T16:05:11.002937Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.128:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-25T16:05:11.003102Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.128:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-25T16:05:11.003185Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"fa515506e66f6916","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-06-25T16:05:11.003346Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ce369a7c509ac3e5"}
	{"level":"info","ts":"2024-06-25T16:05:11.003412Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ce369a7c509ac3e5"}
	{"level":"info","ts":"2024-06-25T16:05:11.003539Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ce369a7c509ac3e5"}
	{"level":"info","ts":"2024-06-25T16:05:11.003747Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5"}
	{"level":"info","ts":"2024-06-25T16:05:11.003805Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5"}
	{"level":"info","ts":"2024-06-25T16:05:11.003858Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fa515506e66f6916","remote-peer-id":"ce369a7c509ac3e5"}
	{"level":"info","ts":"2024-06-25T16:05:11.003931Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ce369a7c509ac3e5"}
	{"level":"info","ts":"2024-06-25T16:05:11.003944Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"861dd526078d031b"}
	{"level":"info","ts":"2024-06-25T16:05:11.003957Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"861dd526078d031b"}
	{"level":"info","ts":"2024-06-25T16:05:11.00399Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"861dd526078d031b"}
	{"level":"info","ts":"2024-06-25T16:05:11.004128Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fa515506e66f6916","remote-peer-id":"861dd526078d031b"}
	{"level":"info","ts":"2024-06-25T16:05:11.004173Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fa515506e66f6916","remote-peer-id":"861dd526078d031b"}
	{"level":"info","ts":"2024-06-25T16:05:11.004217Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fa515506e66f6916","remote-peer-id":"861dd526078d031b"}
	{"level":"info","ts":"2024-06-25T16:05:11.004244Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"861dd526078d031b"}
	{"level":"info","ts":"2024-06-25T16:05:11.006807Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.128:2380"}
	{"level":"info","ts":"2024-06-25T16:05:11.006967Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.128:2380"}
	{"level":"info","ts":"2024-06-25T16:05:11.006994Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-674765","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.128:2380"],"advertise-client-urls":["https://192.168.39.128:2379"]}
	
	
	==> kernel <==
	 16:11:45 up 16 min,  0 users,  load average: 0.09, 0.22, 0.17
	Linux ha-674765 5.10.207 #1 SMP Mon Jun 24 21:03:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [0c3426ec1f55cfcf657863a2bb9d1d1ed319358c204b3013dc6ea1040ef44ede] <==
	I0625 16:11:00.794258       1 main.go:250] Node ha-674765-m04 has CIDR [10.244.3.0/24] 
	I0625 16:11:10.801166       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0625 16:11:10.801205       1 main.go:227] handling current node
	I0625 16:11:10.801215       1 main.go:223] Handling node with IPs: map[192.168.39.53:{}]
	I0625 16:11:10.801220       1 main.go:250] Node ha-674765-m02 has CIDR [10.244.1.0/24] 
	I0625 16:11:10.801332       1 main.go:223] Handling node with IPs: map[192.168.39.7:{}]
	I0625 16:11:10.801364       1 main.go:250] Node ha-674765-m04 has CIDR [10.244.3.0/24] 
	I0625 16:11:20.808140       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0625 16:11:20.808177       1 main.go:227] handling current node
	I0625 16:11:20.808187       1 main.go:223] Handling node with IPs: map[192.168.39.53:{}]
	I0625 16:11:20.808192       1 main.go:250] Node ha-674765-m02 has CIDR [10.244.1.0/24] 
	I0625 16:11:20.808293       1 main.go:223] Handling node with IPs: map[192.168.39.7:{}]
	I0625 16:11:20.808323       1 main.go:250] Node ha-674765-m04 has CIDR [10.244.3.0/24] 
	I0625 16:11:30.818607       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0625 16:11:30.818664       1 main.go:227] handling current node
	I0625 16:11:30.818675       1 main.go:223] Handling node with IPs: map[192.168.39.53:{}]
	I0625 16:11:30.818680       1 main.go:250] Node ha-674765-m02 has CIDR [10.244.1.0/24] 
	I0625 16:11:30.819139       1 main.go:223] Handling node with IPs: map[192.168.39.7:{}]
	I0625 16:11:30.819184       1 main.go:250] Node ha-674765-m04 has CIDR [10.244.3.0/24] 
	I0625 16:11:40.836342       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0625 16:11:40.836512       1 main.go:227] handling current node
	I0625 16:11:40.836547       1 main.go:223] Handling node with IPs: map[192.168.39.53:{}]
	I0625 16:11:40.836580       1 main.go:250] Node ha-674765-m02 has CIDR [10.244.1.0/24] 
	I0625 16:11:40.836796       1 main.go:223] Handling node with IPs: map[192.168.39.7:{}]
	I0625 16:11:40.836943       1 main.go:250] Node ha-674765-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [527ce3a2a7ca580efeb561b1d051220b75eabca666282f31d4b998998c5ae267] <==
	I0625 16:06:50.598180       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0625 16:06:50.598248       1 main.go:107] hostIP = 192.168.39.128
	podIP = 192.168.39.128
	I0625 16:06:50.598427       1 main.go:116] setting mtu 1500 for CNI 
	I0625 16:06:50.598478       1 main.go:146] kindnetd IP family: "ipv4"
	I0625 16:06:50.598503       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0625 16:06:50.900242       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0625 16:06:54.005171       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0625 16:06:57.077342       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0625 16:07:09.087572       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0625 16:07:12.437313       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xe3b
	
	
	==> kube-apiserver [d9d8af94261757538ed64ce37812b4b63ab671c65b53859c558794dffd24a708] <==
	I0625 16:06:55.111328       1 options.go:221] external host was not specified, using 192.168.39.128
	I0625 16:06:55.112616       1 server.go:148] Version: v1.30.2
	I0625 16:06:55.112642       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0625 16:06:55.449297       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0625 16:06:55.464953       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0625 16:06:55.484409       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0625 16:06:55.484450       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0625 16:06:55.484686       1 instance.go:299] Using reconciler: lease
	W0625 16:07:15.448619       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0625 16:07:15.448621       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0625 16:07:15.485841       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [f5616a5da2347e48b25e5207a5d383a1b8395ebbdf7444bc962fbab867ccdb3e] <==
	I0625 16:07:27.635190       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0625 16:07:27.704119       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0625 16:07:27.720516       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0625 16:07:27.720566       1 policy_source.go:224] refreshing policies
	I0625 16:07:27.732945       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0625 16:07:27.733489       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0625 16:07:27.734081       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0625 16:07:27.734154       1 aggregator.go:165] initial CRD sync complete...
	I0625 16:07:27.734190       1 autoregister_controller.go:141] Starting autoregister controller
	I0625 16:07:27.734196       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0625 16:07:27.734201       1 cache.go:39] Caches are synced for autoregister controller
	I0625 16:07:27.737179       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0625 16:07:27.737211       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0625 16:07:27.737272       1 shared_informer.go:320] Caches are synced for configmaps
	I0625 16:07:27.737313       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0625 16:07:27.744159       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0625 16:07:27.762741       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.77]
	I0625 16:07:27.764433       1 controller.go:615] quota admission added evaluator for: endpoints
	I0625 16:07:27.784024       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0625 16:07:27.795817       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0625 16:07:27.807760       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0625 16:07:28.643442       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0625 16:07:29.029672       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.128 192.168.39.77]
	W0625 16:07:49.026748       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.128 192.168.39.53]
	W0625 16:09:19.031950       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.128 192.168.39.53]
	
	
	==> kube-controller-manager [302139a799fb523035e8a52ecadeecbc2fbc59026ad9ff69cbc5264b7192ee4d] <==
	I0625 16:06:55.835731       1 serving.go:380] Generated self-signed cert in-memory
	I0625 16:06:56.077772       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0625 16:06:56.077959       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0625 16:06:56.079679       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0625 16:06:56.080227       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0625 16:06:56.080530       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0625 16:06:56.081092       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0625 16:07:16.492188       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.128:8443/healthz\": dial tcp 192.168.39.128:8443: connect: connection refused"
	
	
	==> kube-controller-manager [b62eb4fc54721a6cc41ca7c6a7e298bebe70e0bd709ee162f8002bcd99b09f69] <==
	E0625 16:09:42.239933       1 gc_controller.go:153] "Failed to get node" err="node \"ha-674765-m03\" not found" logger="pod-garbage-collector-controller" node="ha-674765-m03"
	E0625 16:09:42.239938       1 gc_controller.go:153] "Failed to get node" err="node \"ha-674765-m03\" not found" logger="pod-garbage-collector-controller" node="ha-674765-m03"
	I0625 16:10:01.654621       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.059514ms"
	I0625 16:10:01.654747       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.464µs"
	I0625 16:10:01.751736       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.772507ms"
	I0625 16:10:01.751930       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.925µs"
	I0625 16:10:01.795552       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="119.72873ms"
	I0625 16:10:01.795684       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="71.242µs"
	I0625 16:10:01.861939       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="35.236283ms"
	I0625 16:10:01.862185       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="144.621µs"
	E0625 16:10:02.240563       1 gc_controller.go:153] "Failed to get node" err="node \"ha-674765-m03\" not found" logger="pod-garbage-collector-controller" node="ha-674765-m03"
	E0625 16:10:02.240663       1 gc_controller.go:153] "Failed to get node" err="node \"ha-674765-m03\" not found" logger="pod-garbage-collector-controller" node="ha-674765-m03"
	E0625 16:10:02.240690       1 gc_controller.go:153] "Failed to get node" err="node \"ha-674765-m03\" not found" logger="pod-garbage-collector-controller" node="ha-674765-m03"
	E0625 16:10:02.240713       1 gc_controller.go:153] "Failed to get node" err="node \"ha-674765-m03\" not found" logger="pod-garbage-collector-controller" node="ha-674765-m03"
	E0625 16:10:02.240736       1 gc_controller.go:153] "Failed to get node" err="node \"ha-674765-m03\" not found" logger="pod-garbage-collector-controller" node="ha-674765-m03"
	I0625 16:10:26.699217       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-p7svw EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-p7svw\": the object has been modified; please apply your changes to the latest version and try again"
	I0625 16:10:26.700134       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"bfa70875-b690-4c96-9121-8273f9c838bf", APIVersion:"v1", ResourceVersion:"297", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-p7svw EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-p7svw": the object has been modified; please apply your changes to the latest version and try again
	I0625 16:10:26.711510       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="50.093034ms"
	I0625 16:10:26.711653       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="67.191µs"
	I0625 16:10:26.830426       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-p7svw EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-p7svw\": the object has been modified; please apply your changes to the latest version and try again"
	I0625 16:10:26.830721       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"bfa70875-b690-4c96-9121-8273f9c838bf", APIVersion:"v1", ResourceVersion:"297", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-p7svw EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-p7svw": the object has been modified; please apply your changes to the latest version and try again
	I0625 16:10:26.876150       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="76.31226ms"
	I0625 16:10:26.876335       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="121.152µs"
	I0625 16:10:26.888067       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.767734ms"
	I0625 16:10:26.888146       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.081µs"
	
	
	==> kube-proxy [7cea2f95fa7a7ef2b8d281a5e1b9c59c317c03084458fe57df036d763e43180c] <==
	E0625 16:03:49.109623       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-674765&resourceVersion=1748": dial tcp 192.168.39.254:8443: connect: no route to host
	W0625 16:03:49.109413       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	E0625 16:03:49.109777       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	W0625 16:03:49.109479       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	E0625 16:03:49.109832       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	W0625 16:03:57.365328       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	E0625 16:03:57.365406       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	W0625 16:03:57.365484       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-674765&resourceVersion=1748": dial tcp 192.168.39.254:8443: connect: no route to host
	E0625 16:03:57.365588       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-674765&resourceVersion=1748": dial tcp 192.168.39.254:8443: connect: no route to host
	W0625 16:03:57.365482       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	E0625 16:03:57.365737       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	W0625 16:04:06.774943       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-674765&resourceVersion=1748": dial tcp 192.168.39.254:8443: connect: no route to host
	E0625 16:04:06.775074       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-674765&resourceVersion=1748": dial tcp 192.168.39.254:8443: connect: no route to host
	W0625 16:04:09.846658       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	E0625 16:04:09.847254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	W0625 16:04:09.847085       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	E0625 16:04:09.847431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	W0625 16:04:22.133410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-674765&resourceVersion=1748": dial tcp 192.168.39.254:8443: connect: no route to host
	E0625 16:04:22.133536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-674765&resourceVersion=1748": dial tcp 192.168.39.254:8443: connect: no route to host
	W0625 16:04:28.278763       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	E0625 16:04:28.279464       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	W0625 16:04:28.278853       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	E0625 16:04:28.279689       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	W0625 16:05:05.141391       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	E0625 16:05:05.141505       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1740": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [993db242e335019216c340e823497dc2a88a83153badf0eecd3d96c454418fa2] <==
	I0625 16:06:56.176637       1 server_linux.go:69] "Using iptables proxy"
	E0625 16:06:58.807608       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-674765\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0625 16:07:01.877269       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-674765\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0625 16:07:04.950072       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-674765\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0625 16:07:11.094801       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-674765\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0625 16:07:20.309851       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-674765\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0625 16:07:38.102216       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.128"]
	I0625 16:07:38.148167       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0625 16:07:38.148247       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0625 16:07:38.148264       1 server_linux.go:165] "Using iptables Proxier"
	I0625 16:07:38.151177       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0625 16:07:38.151423       1 server.go:872] "Version info" version="v1.30.2"
	I0625 16:07:38.151449       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0625 16:07:38.152684       1 config.go:192] "Starting service config controller"
	I0625 16:07:38.152744       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0625 16:07:38.152774       1 config.go:101] "Starting endpoint slice config controller"
	I0625 16:07:38.152796       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0625 16:07:38.153419       1 config.go:319] "Starting node config controller"
	I0625 16:07:38.153448       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0625 16:07:38.253939       1 shared_informer.go:320] Caches are synced for node config
	I0625 16:07:38.253986       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0625 16:07:38.254044       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [3e9ec15e9ff71c88b7a6fa2117facf91756b7926806742ce61de5689d4eb2a9a] <==
	W0625 16:07:24.144415       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.128:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0625 16:07:24.144481       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.128:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0625 16:07:24.277096       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.128:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0625 16:07:24.277175       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.128:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0625 16:07:24.303232       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.128:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0625 16:07:24.303313       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.128:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0625 16:07:24.759105       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.128:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0625 16:07:24.759169       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.128:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0625 16:07:24.789029       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.128:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0625 16:07:24.789084       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.128:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0625 16:07:24.919397       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.128:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0625 16:07:24.919506       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.128:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0625 16:07:25.133752       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.128:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0625 16:07:25.133849       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.128:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0625 16:07:25.148528       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.128:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0625 16:07:25.148616       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.128:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0625 16:07:25.374498       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.128:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0625 16:07:25.374593       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.128:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0625 16:07:25.411455       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.128:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0625 16:07:25.411578       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.128:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	I0625 16:07:27.697340       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0625 16:09:07.932567       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-8qcp8\": pod busybox-fc5497c4f-8qcp8 is already assigned to node \"ha-674765-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-8qcp8" node="ha-674765-m04"
	E0625 16:09:07.933192       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod cebf6b61-6d8d-4bf3-af80-3a709f6a3d68(default/busybox-fc5497c4f-8qcp8) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-8qcp8"
	E0625 16:09:07.933391       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-8qcp8\": pod busybox-fc5497c4f-8qcp8 is already assigned to node \"ha-674765-m04\"" pod="default/busybox-fc5497c4f-8qcp8"
	I0625 16:09:07.933484       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-8qcp8" node="ha-674765-m04"
	
	
	==> kube-scheduler [a7ed432b8fb613e5494924e83c46de1bfe881b19bdb28b693da5298cf99f2e65] <==
	W0625 16:05:06.844080       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0625 16:05:06.844118       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0625 16:05:07.165973       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0625 16:05:07.166057       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0625 16:05:07.351656       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0625 16:05:07.351701       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0625 16:05:07.580369       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0625 16:05:07.580462       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0625 16:05:07.902275       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0625 16:05:07.902363       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0625 16:05:07.905108       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0625 16:05:07.905175       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0625 16:05:07.991929       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0625 16:05:07.991978       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0625 16:05:08.060392       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0625 16:05:08.060565       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0625 16:05:08.060509       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0625 16:05:08.060650       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0625 16:05:08.484304       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0625 16:05:08.484447       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0625 16:05:08.633961       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0625 16:05:08.633993       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0625 16:05:09.435926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0625 16:05:09.436019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0625 16:05:10.901721       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jun 25 16:10:06 ha-674765 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 25 16:10:06 ha-674765 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 25 16:10:06 ha-674765 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 25 16:10:06 ha-674765 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 25 16:10:08 ha-674765 kubelet[1375]: E0625 16:10:08.861284    1375 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-674765?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jun 25 16:10:15 ha-674765 kubelet[1375]: E0625 16:10:15.091310    1375 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-674765\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-674765?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jun 25 16:10:15 ha-674765 kubelet[1375]: E0625 16:10:15.091362    1375 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jun 25 16:10:18 ha-674765 kubelet[1375]: W0625 16:10:18.414946    1375 reflector.go:470] object-"kube-system"/"kube-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jun 25 16:10:18 ha-674765 kubelet[1375]: I0625 16:10:18.414952    1375 status_manager.go:853] "Failed to get status for pod" podUID="c227c5cf-2bd6-4ebf-9fdb-09d4229cf421" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": http2: client connection lost"
	Jun 25 16:10:18 ha-674765 kubelet[1375]: E0625 16:10:18.415034    1375 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-674765?timeout=10s\": http2: client connection lost"
	Jun 25 16:10:18 ha-674765 kubelet[1375]: I0625 16:10:18.415057    1375 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Jun 25 16:10:18 ha-674765 kubelet[1375]: W0625 16:10:18.415378    1375 reflector.go:470] object-"default"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jun 25 16:10:18 ha-674765 kubelet[1375]: W0625 16:10:18.415410    1375 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jun 25 16:10:18 ha-674765 kubelet[1375]: W0625 16:10:18.415431    1375 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jun 25 16:10:18 ha-674765 kubelet[1375]: W0625 16:10:18.415458    1375 reflector.go:470] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jun 25 16:10:18 ha-674765 kubelet[1375]: W0625 16:10:18.415486    1375 reflector.go:470] object-"kube-system"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jun 25 16:10:18 ha-674765 kubelet[1375]: W0625 16:10:18.415559    1375 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jun 25 16:10:18 ha-674765 kubelet[1375]: W0625 16:10:18.415585    1375 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jun 25 16:10:18 ha-674765 kubelet[1375]: W0625 16:10:18.415612    1375 reflector.go:470] object-"kube-system"/"coredns": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jun 25 16:10:19 ha-674765 kubelet[1375]: I0625 16:10:19.862527    1375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-674765" podStartSLOduration=113.862440598 podStartE2EDuration="1m53.862440598s" podCreationTimestamp="2024-06-25 16:08:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:08:36.605701918 +0000 UTC m=+750.187450636" watchObservedRunningTime="2024-06-25 16:10:19.862440598 +0000 UTC m=+853.444189319"
	Jun 25 16:11:06 ha-674765 kubelet[1375]: E0625 16:11:06.613536    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 25 16:11:06 ha-674765 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 25 16:11:06 ha-674765 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 25 16:11:06 ha-674765 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 25 16:11:06 ha-674765 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0625 16:11:44.595957   44683 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19128-13846/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-674765 -n ha-674765
helpers_test.go:261: (dbg) Run:  kubectl --context ha-674765 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.73s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (304.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-552402
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-552402
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-552402: exit status 82 (2m1.98538497s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-552402-m03"  ...
	* Stopping node "multinode-552402-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-552402" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-552402 --wait=true -v=8 --alsologtostderr
E0625 16:29:29.128117   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-552402 --wait=true -v=8 --alsologtostderr: (3m0.584517749s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-552402
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-552402 -n multinode-552402
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-552402 logs -n 25: (1.428557314s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-552402 ssh -n                                                                 | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | multinode-552402-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-552402 cp multinode-552402-m02:/home/docker/cp-test.txt                       | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1179120027/001/cp-test_multinode-552402-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-552402 ssh -n                                                                 | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | multinode-552402-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-552402 cp multinode-552402-m02:/home/docker/cp-test.txt                       | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | multinode-552402:/home/docker/cp-test_multinode-552402-m02_multinode-552402.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-552402 ssh -n                                                                 | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | multinode-552402-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-552402 ssh -n multinode-552402 sudo cat                                       | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | /home/docker/cp-test_multinode-552402-m02_multinode-552402.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-552402 cp multinode-552402-m02:/home/docker/cp-test.txt                       | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | multinode-552402-m03:/home/docker/cp-test_multinode-552402-m02_multinode-552402-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-552402 ssh -n                                                                 | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | multinode-552402-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-552402 ssh -n multinode-552402-m03 sudo cat                                   | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | /home/docker/cp-test_multinode-552402-m02_multinode-552402-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-552402 cp testdata/cp-test.txt                                                | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | multinode-552402-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-552402 ssh -n                                                                 | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | multinode-552402-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-552402 cp multinode-552402-m03:/home/docker/cp-test.txt                       | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1179120027/001/cp-test_multinode-552402-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-552402 ssh -n                                                                 | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | multinode-552402-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-552402 cp multinode-552402-m03:/home/docker/cp-test.txt                       | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | multinode-552402:/home/docker/cp-test_multinode-552402-m03_multinode-552402.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-552402 ssh -n                                                                 | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | multinode-552402-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-552402 ssh -n multinode-552402 sudo cat                                       | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | /home/docker/cp-test_multinode-552402-m03_multinode-552402.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-552402 cp multinode-552402-m03:/home/docker/cp-test.txt                       | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | multinode-552402-m02:/home/docker/cp-test_multinode-552402-m03_multinode-552402-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-552402 ssh -n                                                                 | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | multinode-552402-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-552402 ssh -n multinode-552402-m02 sudo cat                                   | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | /home/docker/cp-test_multinode-552402-m03_multinode-552402-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-552402 node stop m03                                                          | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	| node    | multinode-552402 node start                                                             | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-552402                                                                | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC |                     |
	| stop    | -p multinode-552402                                                                     | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC |                     |
	| start   | -p multinode-552402                                                                     | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:27 UTC | 25 Jun 24 16:30 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-552402                                                                | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:30 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
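	The cp/ssh row pairs in the table above follow a copy-then-verify pattern: each `cp` into a node is immediately followed by an `ssh -n <node> sudo cat` of the target path. A minimal stand-alone reproduction against the profile named in the table (illustrative only; node names and paths are taken from the rows above, and the `-p` invocation form is an assumption about how the audited args map to the CLI) would look like:
	
		# copy a local test file into node m03 of the profile, then read it back over SSH
		out/minikube-linux-amd64 -p multinode-552402 cp testdata/cp-test.txt multinode-552402-m03:/home/docker/cp-test.txt
		out/minikube-linux-amd64 -p multinode-552402 ssh -n multinode-552402-m03 "sudo cat /home/docker/cp-test.txt"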
	
	
	==> Last Start <==
	Log file created at: 2024/06/25 16:27:53
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0625 16:27:53.537008   54199 out.go:291] Setting OutFile to fd 1 ...
	I0625 16:27:53.537264   54199 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:27:53.537274   54199 out.go:304] Setting ErrFile to fd 2...
	I0625 16:27:53.537277   54199 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:27:53.537470   54199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 16:27:53.537944   54199 out.go:298] Setting JSON to false
	I0625 16:27:53.538775   54199 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7818,"bootTime":1719325056,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0625 16:27:53.538827   54199 start.go:139] virtualization: kvm guest
	I0625 16:27:53.541156   54199 out.go:177] * [multinode-552402] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0625 16:27:53.542602   54199 out.go:177]   - MINIKUBE_LOCATION=19128
	I0625 16:27:53.542604   54199 notify.go:220] Checking for updates...
	I0625 16:27:53.543974   54199 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0625 16:27:53.545417   54199 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19128-13846/kubeconfig
	I0625 16:27:53.546821   54199 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 16:27:53.548369   54199 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0625 16:27:53.549655   54199 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0625 16:27:53.551198   54199 config.go:182] Loaded profile config "multinode-552402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:27:53.551292   54199 driver.go:392] Setting default libvirt URI to qemu:///system
	I0625 16:27:53.551820   54199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:27:53.551899   54199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:27:53.566642   54199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45121
	I0625 16:27:53.567031   54199 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:27:53.567534   54199 main.go:141] libmachine: Using API Version  1
	I0625 16:27:53.567550   54199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:27:53.567883   54199 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:27:53.568066   54199 main.go:141] libmachine: (multinode-552402) Calling .DriverName
	I0625 16:27:53.601929   54199 out.go:177] * Using the kvm2 driver based on existing profile
	I0625 16:27:53.603111   54199 start.go:297] selected driver: kvm2
	I0625 16:27:53.603131   54199 start.go:901] validating driver "kvm2" against &{Name:multinode-552402 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.2 ClusterName:multinode-552402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.177 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 16:27:53.603290   54199 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0625 16:27:53.603650   54199 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0625 16:27:53.603739   54199 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19128-13846/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0625 16:27:53.617998   54199 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0625 16:27:53.618656   54199 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0625 16:27:53.618680   54199 cni.go:84] Creating CNI manager for ""
	I0625 16:27:53.618688   54199 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0625 16:27:53.618754   54199 start.go:340] cluster config:
	{Name:multinode-552402 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-552402 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.177 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false
kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwareP
ath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 16:27:53.618890   54199 iso.go:125] acquiring lock: {Name:mk76df652d5e768afc73443035d5ecb8b75ed16c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0625 16:27:53.620615   54199 out.go:177] * Starting "multinode-552402" primary control-plane node in "multinode-552402" cluster
	I0625 16:27:53.621862   54199 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0625 16:27:53.621889   54199 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0625 16:27:53.621901   54199 cache.go:56] Caching tarball of preloaded images
	I0625 16:27:53.621981   54199 preload.go:173] Found /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0625 16:27:53.621995   54199 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0625 16:27:53.622127   54199 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/multinode-552402/config.json ...
	I0625 16:27:53.622341   54199 start.go:360] acquireMachinesLock for multinode-552402: {Name:mk2a1ebee912b37a2b68bf2f76641f82f8fc2fcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0625 16:27:53.622393   54199 start.go:364] duration metric: took 28.95µs to acquireMachinesLock for "multinode-552402"
	I0625 16:27:53.622412   54199 start.go:96] Skipping create...Using existing machine configuration
	I0625 16:27:53.622421   54199 fix.go:54] fixHost starting: 
	I0625 16:27:53.622729   54199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:27:53.622758   54199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:27:53.636140   54199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33755
	I0625 16:27:53.636552   54199 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:27:53.636993   54199 main.go:141] libmachine: Using API Version  1
	I0625 16:27:53.637006   54199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:27:53.637268   54199 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:27:53.637449   54199 main.go:141] libmachine: (multinode-552402) Calling .DriverName
	I0625 16:27:53.637559   54199 main.go:141] libmachine: (multinode-552402) Calling .GetState
	I0625 16:27:53.639040   54199 fix.go:112] recreateIfNeeded on multinode-552402: state=Running err=<nil>
	W0625 16:27:53.639067   54199 fix.go:138] unexpected machine state, will restart: <nil>
	I0625 16:27:53.641601   54199 out.go:177] * Updating the running kvm2 "multinode-552402" VM ...
	I0625 16:27:53.642919   54199 machine.go:94] provisionDockerMachine start ...
	I0625 16:27:53.642941   54199 main.go:141] libmachine: (multinode-552402) Calling .DriverName
	I0625 16:27:53.643136   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHHostname
	I0625 16:27:53.645571   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:27:53.646017   54199 main.go:141] libmachine: (multinode-552402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8e:1c", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:23:01 +0000 UTC Type:0 Mac:52:54:00:5d:8e:1c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-552402 Clientid:01:52:54:00:5d:8e:1c}
	I0625 16:27:53.646049   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined IP address 192.168.39.231 and MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:27:53.646192   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHPort
	I0625 16:27:53.646359   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:27:53.646513   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:27:53.646644   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHUsername
	I0625 16:27:53.646793   54199 main.go:141] libmachine: Using SSH client type: native
	I0625 16:27:53.647038   54199 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0625 16:27:53.647054   54199 main.go:141] libmachine: About to run SSH command:
	hostname
	I0625 16:27:53.759653   54199 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-552402
	
	I0625 16:27:53.759680   54199 main.go:141] libmachine: (multinode-552402) Calling .GetMachineName
	I0625 16:27:53.759902   54199 buildroot.go:166] provisioning hostname "multinode-552402"
	I0625 16:27:53.759924   54199 main.go:141] libmachine: (multinode-552402) Calling .GetMachineName
	I0625 16:27:53.760089   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHHostname
	I0625 16:27:53.762561   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:27:53.763003   54199 main.go:141] libmachine: (multinode-552402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8e:1c", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:23:01 +0000 UTC Type:0 Mac:52:54:00:5d:8e:1c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-552402 Clientid:01:52:54:00:5d:8e:1c}
	I0625 16:27:53.763033   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined IP address 192.168.39.231 and MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:27:53.763153   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHPort
	I0625 16:27:53.763330   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:27:53.763468   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:27:53.763609   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHUsername
	I0625 16:27:53.763771   54199 main.go:141] libmachine: Using SSH client type: native
	I0625 16:27:53.763970   54199 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0625 16:27:53.763983   54199 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-552402 && echo "multinode-552402" | sudo tee /etc/hostname
	I0625 16:27:53.890422   54199 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-552402
	
	I0625 16:27:53.890451   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHHostname
	I0625 16:27:53.893359   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:27:53.893695   54199 main.go:141] libmachine: (multinode-552402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8e:1c", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:23:01 +0000 UTC Type:0 Mac:52:54:00:5d:8e:1c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-552402 Clientid:01:52:54:00:5d:8e:1c}
	I0625 16:27:53.893733   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined IP address 192.168.39.231 and MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:27:53.893896   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHPort
	I0625 16:27:53.894123   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:27:53.894283   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:27:53.894449   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHUsername
	I0625 16:27:53.894594   54199 main.go:141] libmachine: Using SSH client type: native
	I0625 16:27:53.894769   54199 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0625 16:27:53.894792   54199 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-552402' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-552402/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-552402' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0625 16:27:54.003658   54199 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0625 16:27:54.003697   54199 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19128-13846/.minikube CaCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19128-13846/.minikube}
	I0625 16:27:54.003723   54199 buildroot.go:174] setting up certificates
	I0625 16:27:54.003736   54199 provision.go:84] configureAuth start
	I0625 16:27:54.003750   54199 main.go:141] libmachine: (multinode-552402) Calling .GetMachineName
	I0625 16:27:54.003989   54199 main.go:141] libmachine: (multinode-552402) Calling .GetIP
	I0625 16:27:54.006804   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:27:54.007181   54199 main.go:141] libmachine: (multinode-552402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8e:1c", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:23:01 +0000 UTC Type:0 Mac:52:54:00:5d:8e:1c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-552402 Clientid:01:52:54:00:5d:8e:1c}
	I0625 16:27:54.007211   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined IP address 192.168.39.231 and MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:27:54.007378   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHHostname
	I0625 16:27:54.009388   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:27:54.009702   54199 main.go:141] libmachine: (multinode-552402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8e:1c", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:23:01 +0000 UTC Type:0 Mac:52:54:00:5d:8e:1c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-552402 Clientid:01:52:54:00:5d:8e:1c}
	I0625 16:27:54.009729   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined IP address 192.168.39.231 and MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:27:54.009837   54199 provision.go:143] copyHostCerts
	I0625 16:27:54.009865   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem
	I0625 16:27:54.009903   54199 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem, removing ...
	I0625 16:27:54.009912   54199 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem
	I0625 16:27:54.009975   54199 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem (1078 bytes)
	I0625 16:27:54.010080   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem
	I0625 16:27:54.010098   54199 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem, removing ...
	I0625 16:27:54.010103   54199 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem
	I0625 16:27:54.010130   54199 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem (1123 bytes)
	I0625 16:27:54.010222   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem
	I0625 16:27:54.010242   54199 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem, removing ...
	I0625 16:27:54.010250   54199 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem
	I0625 16:27:54.010273   54199 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem (1679 bytes)
	I0625 16:27:54.010334   54199 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem org=jenkins.multinode-552402 san=[127.0.0.1 192.168.39.231 localhost minikube multinode-552402]
	I0625 16:27:54.108999   54199 provision.go:177] copyRemoteCerts
	I0625 16:27:54.109050   54199 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0625 16:27:54.109072   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHHostname
	I0625 16:27:54.111627   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:27:54.111975   54199 main.go:141] libmachine: (multinode-552402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8e:1c", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:23:01 +0000 UTC Type:0 Mac:52:54:00:5d:8e:1c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-552402 Clientid:01:52:54:00:5d:8e:1c}
	I0625 16:27:54.112014   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined IP address 192.168.39.231 and MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:27:54.112136   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHPort
	I0625 16:27:54.112305   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:27:54.112445   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHUsername
	I0625 16:27:54.112567   54199 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/multinode-552402/id_rsa Username:docker}
	I0625 16:27:54.196809   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0625 16:27:54.196893   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0625 16:27:54.221770   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0625 16:27:54.221817   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0625 16:27:54.245559   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0625 16:27:54.245626   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0625 16:27:54.269177   54199 provision.go:87] duration metric: took 265.426707ms to configureAuth
	I0625 16:27:54.269205   54199 buildroot.go:189] setting minikube options for container-runtime
	I0625 16:27:54.269412   54199 config.go:182] Loaded profile config "multinode-552402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:27:54.269476   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHHostname
	I0625 16:27:54.272193   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:27:54.272627   54199 main.go:141] libmachine: (multinode-552402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8e:1c", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:23:01 +0000 UTC Type:0 Mac:52:54:00:5d:8e:1c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-552402 Clientid:01:52:54:00:5d:8e:1c}
	I0625 16:27:54.272655   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined IP address 192.168.39.231 and MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:27:54.272842   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHPort
	I0625 16:27:54.272985   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:27:54.273157   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:27:54.273334   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHUsername
	I0625 16:27:54.273454   54199 main.go:141] libmachine: Using SSH client type: native
	I0625 16:27:54.273596   54199 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0625 16:27:54.273609   54199 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0625 16:29:25.062515   54199 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0625 16:29:25.062546   54199 machine.go:97] duration metric: took 1m31.419612639s to provisionDockerMachine
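	Most of the 1m31s recorded for provisionDockerMachine above is the `systemctl restart crio` triggered by the sysconfig write at 16:27:54. The `%!s(MISSING)` token in the logged command is a Go format-verb artifact in minikube's log writer; a hedged reconstruction of the command actually sent to the guest (based on the surrounding log output, not a verbatim quote of minikube's code) is:
	
		# write the CRI-O insecure-registry option and restart the runtime
		sudo mkdir -p /etc/sysconfig && printf %s "
		CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
		" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio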
	I0625 16:29:25.062558   54199 start.go:293] postStartSetup for "multinode-552402" (driver="kvm2")
	I0625 16:29:25.062569   54199 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0625 16:29:25.062584   54199 main.go:141] libmachine: (multinode-552402) Calling .DriverName
	I0625 16:29:25.062926   54199 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0625 16:29:25.062964   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHHostname
	I0625 16:29:25.065780   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:29:25.066307   54199 main.go:141] libmachine: (multinode-552402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8e:1c", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:23:01 +0000 UTC Type:0 Mac:52:54:00:5d:8e:1c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-552402 Clientid:01:52:54:00:5d:8e:1c}
	I0625 16:29:25.066336   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined IP address 192.168.39.231 and MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:29:25.066463   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHPort
	I0625 16:29:25.066660   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:29:25.066820   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHUsername
	I0625 16:29:25.066956   54199 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/multinode-552402/id_rsa Username:docker}
	I0625 16:29:25.153941   54199 ssh_runner.go:195] Run: cat /etc/os-release
	I0625 16:29:25.158216   54199 command_runner.go:130] > NAME=Buildroot
	I0625 16:29:25.158236   54199 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0625 16:29:25.158240   54199 command_runner.go:130] > ID=buildroot
	I0625 16:29:25.158245   54199 command_runner.go:130] > VERSION_ID=2023.02.9
	I0625 16:29:25.158250   54199 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0625 16:29:25.158277   54199 info.go:137] Remote host: Buildroot 2023.02.9
	I0625 16:29:25.158286   54199 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/addons for local assets ...
	I0625 16:29:25.158339   54199 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/files for local assets ...
	I0625 16:29:25.158424   54199 filesync.go:149] local asset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> 212392.pem in /etc/ssl/certs
	I0625 16:29:25.158436   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> /etc/ssl/certs/212392.pem
	I0625 16:29:25.158554   54199 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0625 16:29:25.167710   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem --> /etc/ssl/certs/212392.pem (1708 bytes)
	I0625 16:29:25.192683   54199 start.go:296] duration metric: took 130.112778ms for postStartSetup
	I0625 16:29:25.192723   54199 fix.go:56] duration metric: took 1m31.570301433s for fixHost
	I0625 16:29:25.192771   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHHostname
	I0625 16:29:25.195558   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:29:25.195974   54199 main.go:141] libmachine: (multinode-552402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8e:1c", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:23:01 +0000 UTC Type:0 Mac:52:54:00:5d:8e:1c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-552402 Clientid:01:52:54:00:5d:8e:1c}
	I0625 16:29:25.196003   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined IP address 192.168.39.231 and MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:29:25.196157   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHPort
	I0625 16:29:25.196388   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:29:25.196565   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:29:25.196725   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHUsername
	I0625 16:29:25.196934   54199 main.go:141] libmachine: Using SSH client type: native
	I0625 16:29:25.197095   54199 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0625 16:29:25.197106   54199 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0625 16:29:25.303367   54199 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719332965.283343000
	
	I0625 16:29:25.303387   54199 fix.go:216] guest clock: 1719332965.283343000
	I0625 16:29:25.303393   54199 fix.go:229] Guest: 2024-06-25 16:29:25.283343 +0000 UTC Remote: 2024-06-25 16:29:25.192728326 +0000 UTC m=+91.687898674 (delta=90.614674ms)
	I0625 16:29:25.303414   54199 fix.go:200] guest clock delta is within tolerance: 90.614674ms
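	The clock check above runs a short command on the guest and compares the result with a host-side timestamp. The `%!s(MISSING).%!N(MISSING)` in the logged command is again a Go format-verb artifact; given that the guest returned `1719332965.283343000` (seconds.nanoseconds), the underlying command is presumably `date +%s.%N`. A hedged sketch of the same comparison from a shell:
	
		# guest epoch time with nanoseconds, as the reconstructed command suggests
		out/minikube-linux-amd64 -p multinode-552402 ssh -n multinode-552402 "date +%s.%N"
		date +%s.%N   # host clock, for comparison
		# per the log above, the start proceeds when the delta between the two stays within tolerance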
	I0625 16:29:25.303422   54199 start.go:83] releasing machines lock for "multinode-552402", held for 1m31.681017187s
	I0625 16:29:25.303446   54199 main.go:141] libmachine: (multinode-552402) Calling .DriverName
	I0625 16:29:25.303740   54199 main.go:141] libmachine: (multinode-552402) Calling .GetIP
	I0625 16:29:25.306119   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:29:25.306415   54199 main.go:141] libmachine: (multinode-552402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8e:1c", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:23:01 +0000 UTC Type:0 Mac:52:54:00:5d:8e:1c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-552402 Clientid:01:52:54:00:5d:8e:1c}
	I0625 16:29:25.306442   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined IP address 192.168.39.231 and MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:29:25.306597   54199 main.go:141] libmachine: (multinode-552402) Calling .DriverName
	I0625 16:29:25.307128   54199 main.go:141] libmachine: (multinode-552402) Calling .DriverName
	I0625 16:29:25.307310   54199 main.go:141] libmachine: (multinode-552402) Calling .DriverName
	I0625 16:29:25.307380   54199 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0625 16:29:25.307430   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHHostname
	I0625 16:29:25.307516   54199 ssh_runner.go:195] Run: cat /version.json
	I0625 16:29:25.307535   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHHostname
	I0625 16:29:25.310079   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:29:25.310316   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:29:25.310437   54199 main.go:141] libmachine: (multinode-552402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8e:1c", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:23:01 +0000 UTC Type:0 Mac:52:54:00:5d:8e:1c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-552402 Clientid:01:52:54:00:5d:8e:1c}
	I0625 16:29:25.310479   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined IP address 192.168.39.231 and MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:29:25.310623   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHPort
	I0625 16:29:25.310663   54199 main.go:141] libmachine: (multinode-552402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8e:1c", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:23:01 +0000 UTC Type:0 Mac:52:54:00:5d:8e:1c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-552402 Clientid:01:52:54:00:5d:8e:1c}
	I0625 16:29:25.310692   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined IP address 192.168.39.231 and MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:29:25.310829   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:29:25.310847   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHPort
	I0625 16:29:25.311041   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHUsername
	I0625 16:29:25.311083   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:29:25.311172   54199 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/multinode-552402/id_rsa Username:docker}
	I0625 16:29:25.311227   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHUsername
	I0625 16:29:25.311371   54199 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/multinode-552402/id_rsa Username:docker}
	I0625 16:29:25.391251   54199 command_runner.go:130] > {"iso_version": "v1.33.1-1719245461-19128", "kicbase_version": "v0.0.44-1719002606-19116", "minikube_version": "v1.33.1", "commit": "a360798964ab8cf5f737423b2567c84f01731264"}
	I0625 16:29:25.418157   54199 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0625 16:29:25.418988   54199 ssh_runner.go:195] Run: systemctl --version
	I0625 16:29:25.425110   54199 command_runner.go:130] > systemd 252 (252)
	I0625 16:29:25.425141   54199 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0625 16:29:25.425199   54199 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0625 16:29:25.582338   54199 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0625 16:29:25.589540   54199 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0625 16:29:25.589834   54199 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0625 16:29:25.589902   54199 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0625 16:29:25.599160   54199 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0625 16:29:25.599180   54199 start.go:494] detecting cgroup driver to use...
	I0625 16:29:25.599238   54199 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0625 16:29:25.614989   54199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0625 16:29:25.629327   54199 docker.go:217] disabling cri-docker service (if available) ...
	I0625 16:29:25.629383   54199 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0625 16:29:25.642491   54199 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0625 16:29:25.655611   54199 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0625 16:29:25.809523   54199 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0625 16:29:25.945961   54199 docker.go:233] disabling docker service ...
	I0625 16:29:25.946034   54199 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0625 16:29:25.962962   54199 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0625 16:29:25.976994   54199 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0625 16:29:26.112301   54199 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0625 16:29:26.253486   54199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0625 16:29:26.267474   54199 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0625 16:29:26.285918   54199 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0625 16:29:26.285956   54199 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0625 16:29:26.286020   54199 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:29:26.297299   54199 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0625 16:29:26.297354   54199 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:29:26.308733   54199 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:29:26.319454   54199 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:29:26.330382   54199 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0625 16:29:26.341188   54199 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:29:26.351915   54199 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:29:26.362406   54199 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:29:26.373022   54199 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0625 16:29:26.382529   54199 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0625 16:29:26.382577   54199 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0625 16:29:26.392650   54199 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 16:29:26.531883   54199 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0625 16:29:28.582018   54199 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.050096893s)
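	Taken together, the commands logged between 16:29:26.267 and the restart above prepare CRI-O for this start: they point crictl at the CRI-O socket, pin the pause image, switch the cgroup manager to cgroupfs, and enable IPv4 forwarding before `systemctl restart crio`. A hedged consolidation of those steps (the `printf %!s(MISSING)` body is reconstructed; the sed expressions are copied from the log lines above):
	
		# crictl endpoint
		printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
		# pause image and cgroup driver in the CRI-O drop-in config
		sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
		# kernel prerequisite and restart
		sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
		sudo systemctl daemon-reload && sudo systemctl restart crio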
	I0625 16:29:28.582062   54199 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0625 16:29:28.582103   54199 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0625 16:29:28.587157   54199 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0625 16:29:28.587185   54199 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0625 16:29:28.587197   54199 command_runner.go:130] > Device: 0,22	Inode: 1326        Links: 1
	I0625 16:29:28.587212   54199 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0625 16:29:28.587223   54199 command_runner.go:130] > Access: 2024-06-25 16:29:28.465516283 +0000
	I0625 16:29:28.587235   54199 command_runner.go:130] > Modify: 2024-06-25 16:29:28.465516283 +0000
	I0625 16:29:28.587247   54199 command_runner.go:130] > Change: 2024-06-25 16:29:28.465516283 +0000
	I0625 16:29:28.587256   54199 command_runner.go:130] >  Birth: -
	I0625 16:29:28.587280   54199 start.go:562] Will wait 60s for crictl version
	I0625 16:29:28.587321   54199 ssh_runner.go:195] Run: which crictl
	I0625 16:29:28.591243   54199 command_runner.go:130] > /usr/bin/crictl
	I0625 16:29:28.591299   54199 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0625 16:29:28.629070   54199 command_runner.go:130] > Version:  0.1.0
	I0625 16:29:28.629086   54199 command_runner.go:130] > RuntimeName:  cri-o
	I0625 16:29:28.629091   54199 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0625 16:29:28.629097   54199 command_runner.go:130] > RuntimeApiVersion:  v1
	I0625 16:29:28.630138   54199 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0625 16:29:28.630224   54199 ssh_runner.go:195] Run: crio --version
	I0625 16:29:28.657653   54199 command_runner.go:130] > crio version 1.29.1
	I0625 16:29:28.657675   54199 command_runner.go:130] > Version:        1.29.1
	I0625 16:29:28.657684   54199 command_runner.go:130] > GitCommit:      unknown
	I0625 16:29:28.657691   54199 command_runner.go:130] > GitCommitDate:  unknown
	I0625 16:29:28.657698   54199 command_runner.go:130] > GitTreeState:   clean
	I0625 16:29:28.657707   54199 command_runner.go:130] > BuildDate:      2024-06-24T21:45:48Z
	I0625 16:29:28.657714   54199 command_runner.go:130] > GoVersion:      go1.21.6
	I0625 16:29:28.657720   54199 command_runner.go:130] > Compiler:       gc
	I0625 16:29:28.657729   54199 command_runner.go:130] > Platform:       linux/amd64
	I0625 16:29:28.657736   54199 command_runner.go:130] > Linkmode:       dynamic
	I0625 16:29:28.657763   54199 command_runner.go:130] > BuildTags:      
	I0625 16:29:28.657775   54199 command_runner.go:130] >   containers_image_ostree_stub
	I0625 16:29:28.657782   54199 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0625 16:29:28.657789   54199 command_runner.go:130] >   btrfs_noversion
	I0625 16:29:28.657799   54199 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0625 16:29:28.657808   54199 command_runner.go:130] >   libdm_no_deferred_remove
	I0625 16:29:28.657817   54199 command_runner.go:130] >   seccomp
	I0625 16:29:28.657827   54199 command_runner.go:130] > LDFlags:          unknown
	I0625 16:29:28.657839   54199 command_runner.go:130] > SeccompEnabled:   true
	I0625 16:29:28.657845   54199 command_runner.go:130] > AppArmorEnabled:  false
	I0625 16:29:28.657918   54199 ssh_runner.go:195] Run: crio --version
	I0625 16:29:28.684341   54199 command_runner.go:130] > crio version 1.29.1
	I0625 16:29:28.684364   54199 command_runner.go:130] > Version:        1.29.1
	I0625 16:29:28.684369   54199 command_runner.go:130] > GitCommit:      unknown
	I0625 16:29:28.684373   54199 command_runner.go:130] > GitCommitDate:  unknown
	I0625 16:29:28.684378   54199 command_runner.go:130] > GitTreeState:   clean
	I0625 16:29:28.684383   54199 command_runner.go:130] > BuildDate:      2024-06-24T21:45:48Z
	I0625 16:29:28.684387   54199 command_runner.go:130] > GoVersion:      go1.21.6
	I0625 16:29:28.684391   54199 command_runner.go:130] > Compiler:       gc
	I0625 16:29:28.684395   54199 command_runner.go:130] > Platform:       linux/amd64
	I0625 16:29:28.684399   54199 command_runner.go:130] > Linkmode:       dynamic
	I0625 16:29:28.684403   54199 command_runner.go:130] > BuildTags:      
	I0625 16:29:28.684407   54199 command_runner.go:130] >   containers_image_ostree_stub
	I0625 16:29:28.684412   54199 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0625 16:29:28.684416   54199 command_runner.go:130] >   btrfs_noversion
	I0625 16:29:28.684420   54199 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0625 16:29:28.684424   54199 command_runner.go:130] >   libdm_no_deferred_remove
	I0625 16:29:28.684427   54199 command_runner.go:130] >   seccomp
	I0625 16:29:28.684431   54199 command_runner.go:130] > LDFlags:          unknown
	I0625 16:29:28.684435   54199 command_runner.go:130] > SeccompEnabled:   true
	I0625 16:29:28.684439   54199 command_runner.go:130] > AppArmorEnabled:  false
	I0625 16:29:28.687372   54199 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0625 16:29:28.688550   54199 main.go:141] libmachine: (multinode-552402) Calling .GetIP
	I0625 16:29:28.690939   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:29:28.691228   54199 main.go:141] libmachine: (multinode-552402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8e:1c", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:23:01 +0000 UTC Type:0 Mac:52:54:00:5d:8e:1c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-552402 Clientid:01:52:54:00:5d:8e:1c}
	I0625 16:29:28.691259   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined IP address 192.168.39.231 and MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:29:28.691492   54199 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0625 16:29:28.695753   54199 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0625 16:29:28.695839   54199 kubeadm.go:877] updating cluster {Name:multinode-552402 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-552402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.177 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0625 16:29:28.695994   54199 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0625 16:29:28.696037   54199 ssh_runner.go:195] Run: sudo crictl images --output json
	I0625 16:29:28.747797   54199 command_runner.go:130] > {
	I0625 16:29:28.747817   54199 command_runner.go:130] >   "images": [
	I0625 16:29:28.747822   54199 command_runner.go:130] >     {
	I0625 16:29:28.747843   54199 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0625 16:29:28.747848   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.747854   54199 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0625 16:29:28.747857   54199 command_runner.go:130] >       ],
	I0625 16:29:28.747861   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.747869   54199 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0625 16:29:28.747880   54199 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0625 16:29:28.747883   54199 command_runner.go:130] >       ],
	I0625 16:29:28.747888   54199 command_runner.go:130] >       "size": "65908273",
	I0625 16:29:28.747893   54199 command_runner.go:130] >       "uid": null,
	I0625 16:29:28.747897   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.747904   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.747908   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.747915   54199 command_runner.go:130] >     },
	I0625 16:29:28.747918   54199 command_runner.go:130] >     {
	I0625 16:29:28.747924   54199 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0625 16:29:28.747930   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.747993   54199 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0625 16:29:28.748015   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748022   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.748034   54199 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0625 16:29:28.748047   54199 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0625 16:29:28.748056   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748063   54199 command_runner.go:130] >       "size": "1363676",
	I0625 16:29:28.748072   54199 command_runner.go:130] >       "uid": null,
	I0625 16:29:28.748083   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.748093   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.748102   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.748111   54199 command_runner.go:130] >     },
	I0625 16:29:28.748116   54199 command_runner.go:130] >     {
	I0625 16:29:28.748128   54199 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0625 16:29:28.748134   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.748144   54199 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0625 16:29:28.748153   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748160   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.748175   54199 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0625 16:29:28.748190   54199 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0625 16:29:28.748199   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748208   54199 command_runner.go:130] >       "size": "31470524",
	I0625 16:29:28.748217   54199 command_runner.go:130] >       "uid": null,
	I0625 16:29:28.748223   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.748230   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.748239   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.748245   54199 command_runner.go:130] >     },
	I0625 16:29:28.748253   54199 command_runner.go:130] >     {
	I0625 16:29:28.748263   54199 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0625 16:29:28.748277   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.748291   54199 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0625 16:29:28.748300   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748305   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.748314   54199 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0625 16:29:28.748327   54199 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0625 16:29:28.748334   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748338   54199 command_runner.go:130] >       "size": "61245718",
	I0625 16:29:28.748342   54199 command_runner.go:130] >       "uid": null,
	I0625 16:29:28.748346   54199 command_runner.go:130] >       "username": "nonroot",
	I0625 16:29:28.748350   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.748356   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.748361   54199 command_runner.go:130] >     },
	I0625 16:29:28.748365   54199 command_runner.go:130] >     {
	I0625 16:29:28.748373   54199 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0625 16:29:28.748379   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.748384   54199 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0625 16:29:28.748390   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748393   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.748400   54199 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0625 16:29:28.748410   54199 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0625 16:29:28.748415   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748422   54199 command_runner.go:130] >       "size": "150779692",
	I0625 16:29:28.748425   54199 command_runner.go:130] >       "uid": {
	I0625 16:29:28.748432   54199 command_runner.go:130] >         "value": "0"
	I0625 16:29:28.748435   54199 command_runner.go:130] >       },
	I0625 16:29:28.748440   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.748445   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.748449   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.748453   54199 command_runner.go:130] >     },
	I0625 16:29:28.748458   54199 command_runner.go:130] >     {
	I0625 16:29:28.748464   54199 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0625 16:29:28.748470   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.748474   54199 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0625 16:29:28.748478   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748483   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.748492   54199 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0625 16:29:28.748500   54199 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0625 16:29:28.748505   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748509   54199 command_runner.go:130] >       "size": "117609954",
	I0625 16:29:28.748516   54199 command_runner.go:130] >       "uid": {
	I0625 16:29:28.748519   54199 command_runner.go:130] >         "value": "0"
	I0625 16:29:28.748522   54199 command_runner.go:130] >       },
	I0625 16:29:28.748526   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.748530   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.748534   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.748538   54199 command_runner.go:130] >     },
	I0625 16:29:28.748541   54199 command_runner.go:130] >     {
	I0625 16:29:28.748546   54199 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0625 16:29:28.748553   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.748558   54199 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0625 16:29:28.748563   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748567   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.748575   54199 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0625 16:29:28.748585   54199 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0625 16:29:28.748591   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748595   54199 command_runner.go:130] >       "size": "112194888",
	I0625 16:29:28.748601   54199 command_runner.go:130] >       "uid": {
	I0625 16:29:28.748605   54199 command_runner.go:130] >         "value": "0"
	I0625 16:29:28.748608   54199 command_runner.go:130] >       },
	I0625 16:29:28.748614   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.748618   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.748622   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.748625   54199 command_runner.go:130] >     },
	I0625 16:29:28.748628   54199 command_runner.go:130] >     {
	I0625 16:29:28.748636   54199 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0625 16:29:28.748639   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.748644   54199 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0625 16:29:28.748651   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748654   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.748667   54199 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0625 16:29:28.748675   54199 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0625 16:29:28.748679   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748695   54199 command_runner.go:130] >       "size": "85953433",
	I0625 16:29:28.748702   54199 command_runner.go:130] >       "uid": null,
	I0625 16:29:28.748706   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.748710   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.748714   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.748716   54199 command_runner.go:130] >     },
	I0625 16:29:28.748722   54199 command_runner.go:130] >     {
	I0625 16:29:28.748731   54199 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0625 16:29:28.748738   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.748745   54199 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0625 16:29:28.748751   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748758   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.748768   54199 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0625 16:29:28.748780   54199 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0625 16:29:28.748785   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748791   54199 command_runner.go:130] >       "size": "63051080",
	I0625 16:29:28.748797   54199 command_runner.go:130] >       "uid": {
	I0625 16:29:28.748804   54199 command_runner.go:130] >         "value": "0"
	I0625 16:29:28.748809   54199 command_runner.go:130] >       },
	I0625 16:29:28.748815   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.748820   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.748829   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.748835   54199 command_runner.go:130] >     },
	I0625 16:29:28.748844   54199 command_runner.go:130] >     {
	I0625 16:29:28.748851   54199 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0625 16:29:28.748858   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.748862   54199 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0625 16:29:28.748867   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748871   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.748880   54199 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0625 16:29:28.748887   54199 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0625 16:29:28.748893   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748897   54199 command_runner.go:130] >       "size": "750414",
	I0625 16:29:28.748900   54199 command_runner.go:130] >       "uid": {
	I0625 16:29:28.748904   54199 command_runner.go:130] >         "value": "65535"
	I0625 16:29:28.748907   54199 command_runner.go:130] >       },
	I0625 16:29:28.748917   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.748923   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.748927   54199 command_runner.go:130] >       "pinned": true
	I0625 16:29:28.748931   54199 command_runner.go:130] >     }
	I0625 16:29:28.748934   54199 command_runner.go:130] >   ]
	I0625 16:29:28.748937   54199 command_runner.go:130] > }
	I0625 16:29:28.749105   54199 crio.go:514] all images are preloaded for cri-o runtime.
	I0625 16:29:28.749117   54199 crio.go:433] Images already preloaded, skipping extraction
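The JSON image list above is what crio.go inspects before concluding that all preload images are already present and extraction can be skipped. A minimal standalone sketch of that kind of check, parsing the same `sudo crictl images --output json` output (the required-tag list below is only an illustrative subset copied from the log, not minikube's full preload manifest):

	// imagecheck.go - sketch only, not minikube's crio.go: runs the crictl command
	// seen above and verifies a few expected repo tags are present.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Matches the JSON shape printed above; only the fields needed here.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		// Illustrative subset of the tags shown in the output above.
		for _, want := range []string{
			"registry.k8s.io/kube-apiserver:v1.30.2",
			"registry.k8s.io/etcd:3.5.12-0",
			"registry.k8s.io/coredns/coredns:v1.11.1",
		} {
			if !have[want] {
				fmt.Println("missing:", want)
				return
			}
		}
		fmt.Println("all checked images are present; extraction could be skipped")
	}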
	I0625 16:29:28.749166   54199 ssh_runner.go:195] Run: sudo crictl images --output json
	I0625 16:29:28.781665   54199 command_runner.go:130] > {
	I0625 16:29:28.781689   54199 command_runner.go:130] >   "images": [
	I0625 16:29:28.781695   54199 command_runner.go:130] >     {
	I0625 16:29:28.781709   54199 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0625 16:29:28.781717   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.781729   54199 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0625 16:29:28.781734   54199 command_runner.go:130] >       ],
	I0625 16:29:28.781745   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.781759   54199 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0625 16:29:28.781774   54199 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0625 16:29:28.781781   54199 command_runner.go:130] >       ],
	I0625 16:29:28.781789   54199 command_runner.go:130] >       "size": "65908273",
	I0625 16:29:28.781799   54199 command_runner.go:130] >       "uid": null,
	I0625 16:29:28.781807   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.781822   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.781833   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.781839   54199 command_runner.go:130] >     },
	I0625 16:29:28.781848   54199 command_runner.go:130] >     {
	I0625 16:29:28.781859   54199 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0625 16:29:28.781870   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.781881   54199 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0625 16:29:28.781890   54199 command_runner.go:130] >       ],
	I0625 16:29:28.781901   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.781916   54199 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0625 16:29:28.781931   54199 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0625 16:29:28.781940   54199 command_runner.go:130] >       ],
	I0625 16:29:28.781947   54199 command_runner.go:130] >       "size": "1363676",
	I0625 16:29:28.781956   54199 command_runner.go:130] >       "uid": null,
	I0625 16:29:28.781966   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.781982   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.781992   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.782001   54199 command_runner.go:130] >     },
	I0625 16:29:28.782009   54199 command_runner.go:130] >     {
	I0625 16:29:28.782020   54199 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0625 16:29:28.782030   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.782039   54199 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0625 16:29:28.782048   54199 command_runner.go:130] >       ],
	I0625 16:29:28.782055   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.782070   54199 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0625 16:29:28.782084   54199 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0625 16:29:28.782092   54199 command_runner.go:130] >       ],
	I0625 16:29:28.782099   54199 command_runner.go:130] >       "size": "31470524",
	I0625 16:29:28.782108   54199 command_runner.go:130] >       "uid": null,
	I0625 16:29:28.782118   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.782125   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.782136   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.782142   54199 command_runner.go:130] >     },
	I0625 16:29:28.782150   54199 command_runner.go:130] >     {
	I0625 16:29:28.782161   54199 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0625 16:29:28.782172   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.782180   54199 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0625 16:29:28.782189   54199 command_runner.go:130] >       ],
	I0625 16:29:28.782196   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.782211   54199 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0625 16:29:28.782234   54199 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0625 16:29:28.782243   54199 command_runner.go:130] >       ],
	I0625 16:29:28.782250   54199 command_runner.go:130] >       "size": "61245718",
	I0625 16:29:28.782256   54199 command_runner.go:130] >       "uid": null,
	I0625 16:29:28.782263   54199 command_runner.go:130] >       "username": "nonroot",
	I0625 16:29:28.782271   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.782277   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.782285   54199 command_runner.go:130] >     },
	I0625 16:29:28.782290   54199 command_runner.go:130] >     {
	I0625 16:29:28.782301   54199 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0625 16:29:28.782308   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.782324   54199 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0625 16:29:28.782333   54199 command_runner.go:130] >       ],
	I0625 16:29:28.782339   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.782362   54199 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0625 16:29:28.782376   54199 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0625 16:29:28.782381   54199 command_runner.go:130] >       ],
	I0625 16:29:28.782387   54199 command_runner.go:130] >       "size": "150779692",
	I0625 16:29:28.782395   54199 command_runner.go:130] >       "uid": {
	I0625 16:29:28.782401   54199 command_runner.go:130] >         "value": "0"
	I0625 16:29:28.782409   54199 command_runner.go:130] >       },
	I0625 16:29:28.782416   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.782425   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.782431   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.782440   54199 command_runner.go:130] >     },
	I0625 16:29:28.782444   54199 command_runner.go:130] >     {
	I0625 16:29:28.782455   54199 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0625 16:29:28.782464   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.782496   54199 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0625 16:29:28.782505   54199 command_runner.go:130] >       ],
	I0625 16:29:28.782511   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.782525   54199 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0625 16:29:28.782540   54199 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0625 16:29:28.782549   54199 command_runner.go:130] >       ],
	I0625 16:29:28.782555   54199 command_runner.go:130] >       "size": "117609954",
	I0625 16:29:28.782565   54199 command_runner.go:130] >       "uid": {
	I0625 16:29:28.782572   54199 command_runner.go:130] >         "value": "0"
	I0625 16:29:28.782580   54199 command_runner.go:130] >       },
	I0625 16:29:28.782587   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.782596   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.782601   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.782610   54199 command_runner.go:130] >     },
	I0625 16:29:28.782616   54199 command_runner.go:130] >     {
	I0625 16:29:28.782628   54199 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0625 16:29:28.782650   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.782661   54199 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0625 16:29:28.782666   54199 command_runner.go:130] >       ],
	I0625 16:29:28.782682   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.782698   54199 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0625 16:29:28.782715   54199 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0625 16:29:28.782724   54199 command_runner.go:130] >       ],
	I0625 16:29:28.782730   54199 command_runner.go:130] >       "size": "112194888",
	I0625 16:29:28.782739   54199 command_runner.go:130] >       "uid": {
	I0625 16:29:28.782746   54199 command_runner.go:130] >         "value": "0"
	I0625 16:29:28.782754   54199 command_runner.go:130] >       },
	I0625 16:29:28.782760   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.782766   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.782775   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.782781   54199 command_runner.go:130] >     },
	I0625 16:29:28.782789   54199 command_runner.go:130] >     {
	I0625 16:29:28.782798   54199 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0625 16:29:28.782806   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.782813   54199 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0625 16:29:28.782820   54199 command_runner.go:130] >       ],
	I0625 16:29:28.782826   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.782862   54199 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0625 16:29:28.782879   54199 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0625 16:29:28.782884   54199 command_runner.go:130] >       ],
	I0625 16:29:28.782890   54199 command_runner.go:130] >       "size": "85953433",
	I0625 16:29:28.782897   54199 command_runner.go:130] >       "uid": null,
	I0625 16:29:28.782904   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.782913   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.782918   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.782927   54199 command_runner.go:130] >     },
	I0625 16:29:28.782932   54199 command_runner.go:130] >     {
	I0625 16:29:28.782944   54199 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0625 16:29:28.782953   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.782964   54199 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0625 16:29:28.782973   54199 command_runner.go:130] >       ],
	I0625 16:29:28.782980   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.782993   54199 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0625 16:29:28.783007   54199 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0625 16:29:28.783016   54199 command_runner.go:130] >       ],
	I0625 16:29:28.783031   54199 command_runner.go:130] >       "size": "63051080",
	I0625 16:29:28.783042   54199 command_runner.go:130] >       "uid": {
	I0625 16:29:28.783047   54199 command_runner.go:130] >         "value": "0"
	I0625 16:29:28.783055   54199 command_runner.go:130] >       },
	I0625 16:29:28.783061   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.783070   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.783076   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.783081   54199 command_runner.go:130] >     },
	I0625 16:29:28.783090   54199 command_runner.go:130] >     {
	I0625 16:29:28.783101   54199 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0625 16:29:28.783109   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.783116   54199 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0625 16:29:28.783124   54199 command_runner.go:130] >       ],
	I0625 16:29:28.783130   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.783144   54199 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0625 16:29:28.783157   54199 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0625 16:29:28.783165   54199 command_runner.go:130] >       ],
	I0625 16:29:28.783169   54199 command_runner.go:130] >       "size": "750414",
	I0625 16:29:28.783173   54199 command_runner.go:130] >       "uid": {
	I0625 16:29:28.783177   54199 command_runner.go:130] >         "value": "65535"
	I0625 16:29:28.783181   54199 command_runner.go:130] >       },
	I0625 16:29:28.783185   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.783189   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.783193   54199 command_runner.go:130] >       "pinned": true
	I0625 16:29:28.783196   54199 command_runner.go:130] >     }
	I0625 16:29:28.783200   54199 command_runner.go:130] >   ]
	I0625 16:29:28.783203   54199 command_runner.go:130] > }
	I0625 16:29:28.783539   54199 crio.go:514] all images are preloaded for cri-o runtime.
	I0625 16:29:28.783561   54199 cache_images.go:84] Images are preloaded, skipping loading
	I0625 16:29:28.783570   54199 kubeadm.go:928] updating node { 192.168.39.231 8443 v1.30.2 crio true true} ...
	I0625 16:29:28.783678   54199 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-552402 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-552402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
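The block above is the kubelet systemd drop-in minikube renders for this node, with --hostname-override and --node-ip filled in from the node's name and IP. A rough sketch of rendering a drop-in shaped like that with text/template (the field names KubeVersion, Hostname, and NodeIP are made up for illustration; the values are copied from the log):

	// kubelet_unit.go - sketch only: renders a kubelet drop-in like the one logged above.
	package main

	import (
		"os"
		"text/template"
	)

	const unitTmpl = "[Unit]\nWants=crio.service\n\n[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/{{.KubeVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}\n\n[Install]\n"

	func main() {
		t := template.Must(template.New("kubelet").Parse(unitTmpl))
		// Values taken from this log; real code would supply them per node.
		err := t.Execute(os.Stdout, map[string]string{
			"KubeVersion": "v1.30.2",
			"Hostname":    "multinode-552402",
			"NodeIP":      "192.168.39.231",
		})
		if err != nil {
			panic(err)
		}
	}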
	I0625 16:29:28.783753   54199 ssh_runner.go:195] Run: crio config
	I0625 16:29:28.815125   54199 command_runner.go:130] ! time="2024-06-25 16:29:28.795053671Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0625 16:29:28.821643   54199 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0625 16:29:28.827409   54199 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0625 16:29:28.827427   54199 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0625 16:29:28.827434   54199 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0625 16:29:28.827437   54199 command_runner.go:130] > #
	I0625 16:29:28.827451   54199 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0625 16:29:28.827460   54199 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0625 16:29:28.827472   54199 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0625 16:29:28.827487   54199 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0625 16:29:28.827497   54199 command_runner.go:130] > # reload'.
	I0625 16:29:28.827503   54199 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0625 16:29:28.827509   54199 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0625 16:29:28.827515   54199 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0625 16:29:28.827521   54199 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0625 16:29:28.827526   54199 command_runner.go:130] > [crio]
	I0625 16:29:28.827531   54199 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0625 16:29:28.827539   54199 command_runner.go:130] > # containers images, in this directory.
	I0625 16:29:28.827544   54199 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0625 16:29:28.827562   54199 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0625 16:29:28.827571   54199 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0625 16:29:28.827586   54199 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0625 16:29:28.827597   54199 command_runner.go:130] > # imagestore = ""
	I0625 16:29:28.827608   54199 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0625 16:29:28.827621   54199 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0625 16:29:28.827626   54199 command_runner.go:130] > storage_driver = "overlay"
	I0625 16:29:28.827631   54199 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0625 16:29:28.827638   54199 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0625 16:29:28.827644   54199 command_runner.go:130] > storage_option = [
	I0625 16:29:28.827652   54199 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0625 16:29:28.827658   54199 command_runner.go:130] > ]
	I0625 16:29:28.827674   54199 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0625 16:29:28.827687   54199 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0625 16:29:28.827698   54199 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0625 16:29:28.827709   54199 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0625 16:29:28.827715   54199 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0625 16:29:28.827719   54199 command_runner.go:130] > # always happen on a node reboot
	I0625 16:29:28.827725   54199 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0625 16:29:28.827737   54199 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0625 16:29:28.827747   54199 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0625 16:29:28.827759   54199 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0625 16:29:28.827767   54199 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0625 16:29:28.827782   54199 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0625 16:29:28.827797   54199 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0625 16:29:28.827804   54199 command_runner.go:130] > # internal_wipe = true
	I0625 16:29:28.827814   54199 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0625 16:29:28.827827   54199 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0625 16:29:28.827838   54199 command_runner.go:130] > # internal_repair = false
	I0625 16:29:28.827850   54199 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0625 16:29:28.827862   54199 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0625 16:29:28.827874   54199 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0625 16:29:28.827885   54199 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0625 16:29:28.827893   54199 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0625 16:29:28.827902   54199 command_runner.go:130] > [crio.api]
	I0625 16:29:28.827914   54199 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0625 16:29:28.827925   54199 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0625 16:29:28.827936   54199 command_runner.go:130] > # IP address on which the stream server will listen.
	I0625 16:29:28.827946   54199 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0625 16:29:28.827957   54199 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0625 16:29:28.827968   54199 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0625 16:29:28.827973   54199 command_runner.go:130] > # stream_port = "0"
	I0625 16:29:28.827981   54199 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0625 16:29:28.827987   54199 command_runner.go:130] > # stream_enable_tls = false
	I0625 16:29:28.828001   54199 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0625 16:29:28.828011   54199 command_runner.go:130] > # stream_idle_timeout = ""
	I0625 16:29:28.828024   54199 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0625 16:29:28.828037   54199 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0625 16:29:28.828043   54199 command_runner.go:130] > # minutes.
	I0625 16:29:28.828049   54199 command_runner.go:130] > # stream_tls_cert = ""
	I0625 16:29:28.828063   54199 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0625 16:29:28.828074   54199 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0625 16:29:28.828086   54199 command_runner.go:130] > # stream_tls_key = ""
	I0625 16:29:28.828099   54199 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0625 16:29:28.828111   54199 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0625 16:29:28.828130   54199 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0625 16:29:28.828140   54199 command_runner.go:130] > # stream_tls_ca = ""
	I0625 16:29:28.828151   54199 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0625 16:29:28.828161   54199 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0625 16:29:28.828177   54199 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0625 16:29:28.828187   54199 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0625 16:29:28.828201   54199 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0625 16:29:28.828213   54199 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0625 16:29:28.828222   54199 command_runner.go:130] > [crio.runtime]
	I0625 16:29:28.828233   54199 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0625 16:29:28.828264   54199 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0625 16:29:28.828277   54199 command_runner.go:130] > # "nofile=1024:2048"
	I0625 16:29:28.828290   54199 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0625 16:29:28.828300   54199 command_runner.go:130] > # default_ulimits = [
	I0625 16:29:28.828306   54199 command_runner.go:130] > # ]
	I0625 16:29:28.828319   54199 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0625 16:29:28.828326   54199 command_runner.go:130] > # no_pivot = false
	I0625 16:29:28.828334   54199 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0625 16:29:28.828348   54199 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0625 16:29:28.828365   54199 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0625 16:29:28.828377   54199 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0625 16:29:28.828389   54199 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0625 16:29:28.828403   54199 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0625 16:29:28.828411   54199 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0625 16:29:28.828416   54199 command_runner.go:130] > # Cgroup setting for conmon
	I0625 16:29:28.828431   54199 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0625 16:29:28.828441   54199 command_runner.go:130] > conmon_cgroup = "pod"
	I0625 16:29:28.828454   54199 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0625 16:29:28.828465   54199 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0625 16:29:28.828479   54199 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0625 16:29:28.828488   54199 command_runner.go:130] > conmon_env = [
	I0625 16:29:28.828498   54199 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0625 16:29:28.828504   54199 command_runner.go:130] > ]
	I0625 16:29:28.828512   54199 command_runner.go:130] > # Additional environment variables to set for all the
	I0625 16:29:28.828524   54199 command_runner.go:130] > # containers. These are overridden if set in the
	I0625 16:29:28.828537   54199 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0625 16:29:28.828547   54199 command_runner.go:130] > # default_env = [
	I0625 16:29:28.828557   54199 command_runner.go:130] > # ]
	I0625 16:29:28.828566   54199 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0625 16:29:28.828579   54199 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0625 16:29:28.828585   54199 command_runner.go:130] > # selinux = false
	I0625 16:29:28.828594   54199 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0625 16:29:28.828609   54199 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0625 16:29:28.828622   54199 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0625 16:29:28.828631   54199 command_runner.go:130] > # seccomp_profile = ""
	I0625 16:29:28.828644   54199 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0625 16:29:28.828656   54199 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0625 16:29:28.828667   54199 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0625 16:29:28.828674   54199 command_runner.go:130] > # which might increase security.
	I0625 16:29:28.828682   54199 command_runner.go:130] > # This option is currently deprecated,
	I0625 16:29:28.828696   54199 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0625 16:29:28.828707   54199 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0625 16:29:28.828722   54199 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0625 16:29:28.828735   54199 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0625 16:29:28.828749   54199 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0625 16:29:28.828758   54199 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0625 16:29:28.828769   54199 command_runner.go:130] > # This option supports live configuration reload.
	I0625 16:29:28.828779   54199 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0625 16:29:28.828793   54199 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0625 16:29:28.828804   54199 command_runner.go:130] > # the cgroup blockio controller.
	I0625 16:29:28.828815   54199 command_runner.go:130] > # blockio_config_file = ""
	I0625 16:29:28.828828   54199 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0625 16:29:28.828837   54199 command_runner.go:130] > # blockio parameters.
	I0625 16:29:28.828841   54199 command_runner.go:130] > # blockio_reload = false
	I0625 16:29:28.828852   54199 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0625 16:29:28.828862   54199 command_runner.go:130] > # irqbalance daemon.
	I0625 16:29:28.828874   54199 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0625 16:29:28.828888   54199 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0625 16:29:28.828906   54199 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0625 16:29:28.828920   54199 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0625 16:29:28.828930   54199 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0625 16:29:28.828938   54199 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0625 16:29:28.828950   54199 command_runner.go:130] > # This option supports live configuration reload.
	I0625 16:29:28.828962   54199 command_runner.go:130] > # rdt_config_file = ""
	I0625 16:29:28.828972   54199 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0625 16:29:28.828983   54199 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0625 16:29:28.829006   54199 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0625 16:29:28.829014   54199 command_runner.go:130] > # separate_pull_cgroup = ""
	I0625 16:29:28.829021   54199 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0625 16:29:28.829033   54199 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0625 16:29:28.829043   54199 command_runner.go:130] > # will be added.
	I0625 16:29:28.829050   54199 command_runner.go:130] > # default_capabilities = [
	I0625 16:29:28.829059   54199 command_runner.go:130] > # 	"CHOWN",
	I0625 16:29:28.829069   54199 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0625 16:29:28.829078   54199 command_runner.go:130] > # 	"FSETID",
	I0625 16:29:28.829088   54199 command_runner.go:130] > # 	"FOWNER",
	I0625 16:29:28.829096   54199 command_runner.go:130] > # 	"SETGID",
	I0625 16:29:28.829104   54199 command_runner.go:130] > # 	"SETUID",
	I0625 16:29:28.829109   54199 command_runner.go:130] > # 	"SETPCAP",
	I0625 16:29:28.829120   54199 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0625 16:29:28.829129   54199 command_runner.go:130] > # 	"KILL",
	I0625 16:29:28.829138   54199 command_runner.go:130] > # ]
	I0625 16:29:28.829153   54199 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0625 16:29:28.829166   54199 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0625 16:29:28.829177   54199 command_runner.go:130] > # add_inheritable_capabilities = false
	I0625 16:29:28.829187   54199 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0625 16:29:28.829197   54199 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0625 16:29:28.829207   54199 command_runner.go:130] > default_sysctls = [
	I0625 16:29:28.829219   54199 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0625 16:29:28.829227   54199 command_runner.go:130] > ]
	I0625 16:29:28.829238   54199 command_runner.go:130] > # List of devices on the host that a
	I0625 16:29:28.829251   54199 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0625 16:29:28.829261   54199 command_runner.go:130] > # allowed_devices = [
	I0625 16:29:28.829267   54199 command_runner.go:130] > # 	"/dev/fuse",
	I0625 16:29:28.829273   54199 command_runner.go:130] > # ]
	I0625 16:29:28.829278   54199 command_runner.go:130] > # List of additional devices. specified as
	I0625 16:29:28.829294   54199 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0625 16:29:28.829306   54199 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0625 16:29:28.829318   54199 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0625 16:29:28.829329   54199 command_runner.go:130] > # additional_devices = [
	I0625 16:29:28.829337   54199 command_runner.go:130] > # ]
	I0625 16:29:28.829354   54199 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0625 16:29:28.829362   54199 command_runner.go:130] > # cdi_spec_dirs = [
	I0625 16:29:28.829366   54199 command_runner.go:130] > # 	"/etc/cdi",
	I0625 16:29:28.829376   54199 command_runner.go:130] > # 	"/var/run/cdi",
	I0625 16:29:28.829385   54199 command_runner.go:130] > # ]
	I0625 16:29:28.829398   54199 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0625 16:29:28.829411   54199 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0625 16:29:28.829420   54199 command_runner.go:130] > # Defaults to false.
	I0625 16:29:28.829432   54199 command_runner.go:130] > # device_ownership_from_security_context = false
	I0625 16:29:28.829444   54199 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0625 16:29:28.829452   54199 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0625 16:29:28.829458   54199 command_runner.go:130] > # hooks_dir = [
	I0625 16:29:28.829470   54199 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0625 16:29:28.829478   54199 command_runner.go:130] > # ]
	I0625 16:29:28.829490   54199 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0625 16:29:28.829503   54199 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0625 16:29:28.829514   54199 command_runner.go:130] > # its default mounts from the following two files:
	I0625 16:29:28.829522   54199 command_runner.go:130] > #
	I0625 16:29:28.829530   54199 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0625 16:29:28.829540   54199 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0625 16:29:28.829553   54199 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0625 16:29:28.829561   54199 command_runner.go:130] > #
	I0625 16:29:28.829574   54199 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0625 16:29:28.829588   54199 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0625 16:29:28.829601   54199 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0625 16:29:28.829612   54199 command_runner.go:130] > #      only add mounts it finds in this file.
	I0625 16:29:28.829618   54199 command_runner.go:130] > #
	I0625 16:29:28.829622   54199 command_runner.go:130] > # default_mounts_file = ""
	I0625 16:29:28.829634   54199 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0625 16:29:28.829648   54199 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0625 16:29:28.829657   54199 command_runner.go:130] > pids_limit = 1024
	I0625 16:29:28.829670   54199 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0625 16:29:28.829683   54199 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0625 16:29:28.829695   54199 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0625 16:29:28.829707   54199 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0625 16:29:28.829716   54199 command_runner.go:130] > # log_size_max = -1
	I0625 16:29:28.829731   54199 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0625 16:29:28.829741   54199 command_runner.go:130] > # log_to_journald = false
	I0625 16:29:28.829754   54199 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0625 16:29:28.829764   54199 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0625 16:29:28.829776   54199 command_runner.go:130] > # Path to directory for container attach sockets.
	I0625 16:29:28.829787   54199 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0625 16:29:28.829795   54199 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0625 16:29:28.829800   54199 command_runner.go:130] > # bind_mount_prefix = ""
	I0625 16:29:28.829813   54199 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0625 16:29:28.829823   54199 command_runner.go:130] > # read_only = false
	I0625 16:29:28.829836   54199 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0625 16:29:28.829848   54199 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0625 16:29:28.829858   54199 command_runner.go:130] > # live configuration reload.
	I0625 16:29:28.829869   54199 command_runner.go:130] > # log_level = "info"
	I0625 16:29:28.829879   54199 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0625 16:29:28.829887   54199 command_runner.go:130] > # This option supports live configuration reload.
	I0625 16:29:28.829896   54199 command_runner.go:130] > # log_filter = ""
	I0625 16:29:28.829910   54199 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0625 16:29:28.829925   54199 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0625 16:29:28.829935   54199 command_runner.go:130] > # separated by comma.
	I0625 16:29:28.829951   54199 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0625 16:29:28.829960   54199 command_runner.go:130] > # uid_mappings = ""
	I0625 16:29:28.829969   54199 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0625 16:29:28.829982   54199 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0625 16:29:28.829993   54199 command_runner.go:130] > # separated by comma.
	I0625 16:29:28.830008   54199 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0625 16:29:28.830018   54199 command_runner.go:130] > # gid_mappings = ""
	I0625 16:29:28.830027   54199 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0625 16:29:28.830040   54199 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0625 16:29:28.830050   54199 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0625 16:29:28.830064   54199 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0625 16:29:28.830075   54199 command_runner.go:130] > # minimum_mappable_uid = -1
	I0625 16:29:28.830089   54199 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0625 16:29:28.830101   54199 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0625 16:29:28.830115   54199 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0625 16:29:28.830130   54199 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0625 16:29:28.830138   54199 command_runner.go:130] > # minimum_mappable_gid = -1
	I0625 16:29:28.830145   54199 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0625 16:29:28.830158   54199 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0625 16:29:28.830172   54199 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0625 16:29:28.830182   54199 command_runner.go:130] > # ctr_stop_timeout = 30
	I0625 16:29:28.830191   54199 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0625 16:29:28.830204   54199 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0625 16:29:28.830215   54199 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0625 16:29:28.830223   54199 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0625 16:29:28.830228   54199 command_runner.go:130] > drop_infra_ctr = false
	I0625 16:29:28.830240   54199 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0625 16:29:28.830253   54199 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0625 16:29:28.830268   54199 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0625 16:29:28.830278   54199 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0625 16:29:28.830292   54199 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0625 16:29:28.830304   54199 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0625 16:29:28.830313   54199 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0625 16:29:28.830323   54199 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0625 16:29:28.830334   54199 command_runner.go:130] > # shared_cpuset = ""
	I0625 16:29:28.830347   54199 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0625 16:29:28.830363   54199 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0625 16:29:28.830372   54199 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0625 16:29:28.830386   54199 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0625 16:29:28.830395   54199 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0625 16:29:28.830400   54199 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0625 16:29:28.830413   54199 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0625 16:29:28.830423   54199 command_runner.go:130] > # enable_criu_support = false
	I0625 16:29:28.830435   54199 command_runner.go:130] > # Enable/disable the generation of the container and
	I0625 16:29:28.830448   54199 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0625 16:29:28.830458   54199 command_runner.go:130] > # enable_pod_events = false
	I0625 16:29:28.830482   54199 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0625 16:29:28.830507   54199 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0625 16:29:28.830517   54199 command_runner.go:130] > # default_runtime = "runc"
	I0625 16:29:28.830529   54199 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0625 16:29:28.830542   54199 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0625 16:29:28.830558   54199 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0625 16:29:28.830570   54199 command_runner.go:130] > # creation as a file is not desired either.
	I0625 16:29:28.830584   54199 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0625 16:29:28.830596   54199 command_runner.go:130] > # the hostname is being managed dynamically.
	I0625 16:29:28.830606   54199 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0625 16:29:28.830615   54199 command_runner.go:130] > # ]
	I0625 16:29:28.830626   54199 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0625 16:29:28.830636   54199 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0625 16:29:28.830650   54199 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0625 16:29:28.830662   54199 command_runner.go:130] > # Each entry in the table should follow the format:
	I0625 16:29:28.830671   54199 command_runner.go:130] > #
	I0625 16:29:28.830678   54199 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0625 16:29:28.830689   54199 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0625 16:29:28.830713   54199 command_runner.go:130] > # runtime_type = "oci"
	I0625 16:29:28.830721   54199 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0625 16:29:28.830734   54199 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0625 16:29:28.830746   54199 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0625 16:29:28.830757   54199 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0625 16:29:28.830767   54199 command_runner.go:130] > # monitor_env = []
	I0625 16:29:28.830778   54199 command_runner.go:130] > # privileged_without_host_devices = false
	I0625 16:29:28.830788   54199 command_runner.go:130] > # allowed_annotations = []
	I0625 16:29:28.830798   54199 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0625 16:29:28.830804   54199 command_runner.go:130] > # Where:
	I0625 16:29:28.830812   54199 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0625 16:29:28.830827   54199 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0625 16:29:28.830840   54199 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0625 16:29:28.830853   54199 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0625 16:29:28.830862   54199 command_runner.go:130] > #   in $PATH.
	I0625 16:29:28.830875   54199 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0625 16:29:28.830884   54199 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0625 16:29:28.830893   54199 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0625 16:29:28.830902   54199 command_runner.go:130] > #   state.
	I0625 16:29:28.830916   54199 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0625 16:29:28.830929   54199 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0625 16:29:28.830943   54199 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0625 16:29:28.830955   54199 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0625 16:29:28.830968   54199 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0625 16:29:28.830978   54199 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0625 16:29:28.830988   54199 command_runner.go:130] > #   The currently recognized values are:
	I0625 16:29:28.831003   54199 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0625 16:29:28.831018   54199 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0625 16:29:28.831030   54199 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0625 16:29:28.831043   54199 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0625 16:29:28.831056   54199 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0625 16:29:28.831067   54199 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0625 16:29:28.831081   54199 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0625 16:29:28.831095   54199 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0625 16:29:28.831108   54199 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0625 16:29:28.831121   54199 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0625 16:29:28.831131   54199 command_runner.go:130] > #   deprecated option "conmon".
	I0625 16:29:28.831142   54199 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0625 16:29:28.831148   54199 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0625 16:29:28.831159   54199 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0625 16:29:28.831172   54199 command_runner.go:130] > #   should be moved to the container's cgroup
	I0625 16:29:28.831187   54199 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0625 16:29:28.831197   54199 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0625 16:29:28.831211   54199 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0625 16:29:28.831222   54199 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0625 16:29:28.831229   54199 command_runner.go:130] > #
	I0625 16:29:28.831234   54199 command_runner.go:130] > # Using the seccomp notifier feature:
	I0625 16:29:28.831242   54199 command_runner.go:130] > #
	I0625 16:29:28.831255   54199 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0625 16:29:28.831269   54199 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0625 16:29:28.831277   54199 command_runner.go:130] > #
	I0625 16:29:28.831287   54199 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0625 16:29:28.831300   54199 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0625 16:29:28.831308   54199 command_runner.go:130] > #
	I0625 16:29:28.831317   54199 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0625 16:29:28.831325   54199 command_runner.go:130] > # feature.
	I0625 16:29:28.831330   54199 command_runner.go:130] > #
	I0625 16:29:28.831344   54199 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0625 16:29:28.831362   54199 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0625 16:29:28.831374   54199 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0625 16:29:28.831387   54199 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0625 16:29:28.831400   54199 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0625 16:29:28.831406   54199 command_runner.go:130] > #
	I0625 16:29:28.831414   54199 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0625 16:29:28.831428   54199 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0625 16:29:28.831437   54199 command_runner.go:130] > #
	I0625 16:29:28.831450   54199 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0625 16:29:28.831462   54199 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0625 16:29:28.831470   54199 command_runner.go:130] > #
	I0625 16:29:28.831483   54199 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0625 16:29:28.831491   54199 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0625 16:29:28.831497   54199 command_runner.go:130] > # limitation.
	I0625 16:29:28.831508   54199 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0625 16:29:28.831520   54199 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0625 16:29:28.831529   54199 command_runner.go:130] > runtime_type = "oci"
	I0625 16:29:28.831539   54199 command_runner.go:130] > runtime_root = "/run/runc"
	I0625 16:29:28.831548   54199 command_runner.go:130] > runtime_config_path = ""
	I0625 16:29:28.831556   54199 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0625 16:29:28.831565   54199 command_runner.go:130] > monitor_cgroup = "pod"
	I0625 16:29:28.831573   54199 command_runner.go:130] > monitor_exec_cgroup = ""
	I0625 16:29:28.831580   54199 command_runner.go:130] > monitor_env = [
	I0625 16:29:28.831589   54199 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0625 16:29:28.831598   54199 command_runner.go:130] > ]
	I0625 16:29:28.831608   54199 command_runner.go:130] > privileged_without_host_devices = false
	I0625 16:29:28.831621   54199 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0625 16:29:28.831633   54199 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0625 16:29:28.831646   54199 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0625 16:29:28.831659   54199 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0625 16:29:28.831672   54199 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0625 16:29:28.831686   54199 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0625 16:29:28.831704   54199 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0625 16:29:28.831719   54199 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0625 16:29:28.831731   54199 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0625 16:29:28.831745   54199 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0625 16:29:28.831751   54199 command_runner.go:130] > # Example:
	I0625 16:29:28.831756   54199 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0625 16:29:28.831763   54199 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0625 16:29:28.831768   54199 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0625 16:29:28.831777   54199 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0625 16:29:28.831786   54199 command_runner.go:130] > # cpuset = 0
	I0625 16:29:28.831796   54199 command_runner.go:130] > # cpushares = "0-1"
	I0625 16:29:28.831805   54199 command_runner.go:130] > # Where:
	I0625 16:29:28.831815   54199 command_runner.go:130] > # The workload name is workload-type.
	I0625 16:29:28.831830   54199 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0625 16:29:28.831842   54199 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0625 16:29:28.831852   54199 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0625 16:29:28.831861   54199 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0625 16:29:28.831869   54199 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0625 16:29:28.831874   54199 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0625 16:29:28.831882   54199 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0625 16:29:28.831888   54199 command_runner.go:130] > # Default value is set to true
	I0625 16:29:28.831893   54199 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0625 16:29:28.831900   54199 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0625 16:29:28.831905   54199 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0625 16:29:28.831912   54199 command_runner.go:130] > # Default value is set to 'false'
	I0625 16:29:28.831916   54199 command_runner.go:130] > # disable_hostport_mapping = false
	I0625 16:29:28.831925   54199 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0625 16:29:28.831929   54199 command_runner.go:130] > #
	I0625 16:29:28.831939   54199 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0625 16:29:28.831949   54199 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0625 16:29:28.831959   54199 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0625 16:29:28.831969   54199 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0625 16:29:28.831978   54199 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0625 16:29:28.831983   54199 command_runner.go:130] > [crio.image]
	I0625 16:29:28.831993   54199 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0625 16:29:28.831998   54199 command_runner.go:130] > # default_transport = "docker://"
	I0625 16:29:28.832003   54199 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0625 16:29:28.832009   54199 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0625 16:29:28.832013   54199 command_runner.go:130] > # global_auth_file = ""
	I0625 16:29:28.832018   54199 command_runner.go:130] > # The image used to instantiate infra containers.
	I0625 16:29:28.832022   54199 command_runner.go:130] > # This option supports live configuration reload.
	I0625 16:29:28.832027   54199 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0625 16:29:28.832033   54199 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0625 16:29:28.832038   54199 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0625 16:29:28.832043   54199 command_runner.go:130] > # This option supports live configuration reload.
	I0625 16:29:28.832047   54199 command_runner.go:130] > # pause_image_auth_file = ""
	I0625 16:29:28.832052   54199 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0625 16:29:28.832057   54199 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0625 16:29:28.832063   54199 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0625 16:29:28.832068   54199 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0625 16:29:28.832072   54199 command_runner.go:130] > # pause_command = "/pause"
	I0625 16:29:28.832077   54199 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0625 16:29:28.832082   54199 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0625 16:29:28.832087   54199 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0625 16:29:28.832094   54199 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0625 16:29:28.832105   54199 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0625 16:29:28.832111   54199 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0625 16:29:28.832117   54199 command_runner.go:130] > # pinned_images = [
	I0625 16:29:28.832120   54199 command_runner.go:130] > # ]
	I0625 16:29:28.832127   54199 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0625 16:29:28.832135   54199 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0625 16:29:28.832142   54199 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0625 16:29:28.832153   54199 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0625 16:29:28.832166   54199 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0625 16:29:28.832174   54199 command_runner.go:130] > # signature_policy = ""
	I0625 16:29:28.832182   54199 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0625 16:29:28.832188   54199 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0625 16:29:28.832197   54199 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0625 16:29:28.832206   54199 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0625 16:29:28.832214   54199 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0625 16:29:28.832218   54199 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0625 16:29:28.832226   54199 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0625 16:29:28.832236   54199 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0625 16:29:28.832242   54199 command_runner.go:130] > # changing them here.
	I0625 16:29:28.832246   54199 command_runner.go:130] > # insecure_registries = [
	I0625 16:29:28.832252   54199 command_runner.go:130] > # ]
	I0625 16:29:28.832258   54199 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0625 16:29:28.832265   54199 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0625 16:29:28.832269   54199 command_runner.go:130] > # image_volumes = "mkdir"
	I0625 16:29:28.832278   54199 command_runner.go:130] > # Temporary directory to use for storing big files
	I0625 16:29:28.832282   54199 command_runner.go:130] > # big_files_temporary_dir = ""
	I0625 16:29:28.832288   54199 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0625 16:29:28.832294   54199 command_runner.go:130] > # CNI plugins.
	I0625 16:29:28.832298   54199 command_runner.go:130] > [crio.network]
	I0625 16:29:28.832305   54199 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0625 16:29:28.832313   54199 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0625 16:29:28.832317   54199 command_runner.go:130] > # cni_default_network = ""
	I0625 16:29:28.832325   54199 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0625 16:29:28.832330   54199 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0625 16:29:28.832336   54199 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0625 16:29:28.832342   54199 command_runner.go:130] > # plugin_dirs = [
	I0625 16:29:28.832346   54199 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0625 16:29:28.832355   54199 command_runner.go:130] > # ]
	I0625 16:29:28.832361   54199 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0625 16:29:28.832367   54199 command_runner.go:130] > [crio.metrics]
	I0625 16:29:28.832372   54199 command_runner.go:130] > # Globally enable or disable metrics support.
	I0625 16:29:28.832378   54199 command_runner.go:130] > enable_metrics = true
	I0625 16:29:28.832383   54199 command_runner.go:130] > # Specify enabled metrics collectors.
	I0625 16:29:28.832390   54199 command_runner.go:130] > # Per default all metrics are enabled.
	I0625 16:29:28.832396   54199 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0625 16:29:28.832404   54199 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0625 16:29:28.832412   54199 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0625 16:29:28.832419   54199 command_runner.go:130] > # metrics_collectors = [
	I0625 16:29:28.832422   54199 command_runner.go:130] > # 	"operations",
	I0625 16:29:28.832430   54199 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0625 16:29:28.832434   54199 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0625 16:29:28.832438   54199 command_runner.go:130] > # 	"operations_errors",
	I0625 16:29:28.832443   54199 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0625 16:29:28.832449   54199 command_runner.go:130] > # 	"image_pulls_by_name",
	I0625 16:29:28.832454   54199 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0625 16:29:28.832461   54199 command_runner.go:130] > # 	"image_pulls_failures",
	I0625 16:29:28.832465   54199 command_runner.go:130] > # 	"image_pulls_successes",
	I0625 16:29:28.832472   54199 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0625 16:29:28.832476   54199 command_runner.go:130] > # 	"image_layer_reuse",
	I0625 16:29:28.832482   54199 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0625 16:29:28.832487   54199 command_runner.go:130] > # 	"containers_oom_total",
	I0625 16:29:28.832493   54199 command_runner.go:130] > # 	"containers_oom",
	I0625 16:29:28.832496   54199 command_runner.go:130] > # 	"processes_defunct",
	I0625 16:29:28.832502   54199 command_runner.go:130] > # 	"operations_total",
	I0625 16:29:28.832507   54199 command_runner.go:130] > # 	"operations_latency_seconds",
	I0625 16:29:28.832514   54199 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0625 16:29:28.832518   54199 command_runner.go:130] > # 	"operations_errors_total",
	I0625 16:29:28.832524   54199 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0625 16:29:28.832528   54199 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0625 16:29:28.832533   54199 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0625 16:29:28.832543   54199 command_runner.go:130] > # 	"image_pulls_success_total",
	I0625 16:29:28.832554   54199 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0625 16:29:28.832563   54199 command_runner.go:130] > # 	"containers_oom_count_total",
	I0625 16:29:28.832570   54199 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0625 16:29:28.832575   54199 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0625 16:29:28.832578   54199 command_runner.go:130] > # ]
	I0625 16:29:28.832583   54199 command_runner.go:130] > # The port on which the metrics server will listen.
	I0625 16:29:28.832589   54199 command_runner.go:130] > # metrics_port = 9090
	I0625 16:29:28.832594   54199 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0625 16:29:28.832600   54199 command_runner.go:130] > # metrics_socket = ""
	I0625 16:29:28.832605   54199 command_runner.go:130] > # The certificate for the secure metrics server.
	I0625 16:29:28.832613   54199 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0625 16:29:28.832619   54199 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0625 16:29:28.832625   54199 command_runner.go:130] > # certificate on any modification event.
	I0625 16:29:28.832629   54199 command_runner.go:130] > # metrics_cert = ""
	I0625 16:29:28.832637   54199 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0625 16:29:28.832641   54199 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0625 16:29:28.832647   54199 command_runner.go:130] > # metrics_key = ""
	I0625 16:29:28.832653   54199 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0625 16:29:28.832659   54199 command_runner.go:130] > [crio.tracing]
	I0625 16:29:28.832665   54199 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0625 16:29:28.832670   54199 command_runner.go:130] > # enable_tracing = false
	I0625 16:29:28.832677   54199 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0625 16:29:28.832683   54199 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0625 16:29:28.832690   54199 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0625 16:29:28.832697   54199 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0625 16:29:28.832701   54199 command_runner.go:130] > # CRI-O NRI configuration.
	I0625 16:29:28.832707   54199 command_runner.go:130] > [crio.nri]
	I0625 16:29:28.832711   54199 command_runner.go:130] > # Globally enable or disable NRI.
	I0625 16:29:28.832717   54199 command_runner.go:130] > # enable_nri = false
	I0625 16:29:28.832721   54199 command_runner.go:130] > # NRI socket to listen on.
	I0625 16:29:28.832728   54199 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0625 16:29:28.832732   54199 command_runner.go:130] > # NRI plugin directory to use.
	I0625 16:29:28.832739   54199 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0625 16:29:28.832744   54199 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0625 16:29:28.832751   54199 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0625 16:29:28.832756   54199 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0625 16:29:28.832763   54199 command_runner.go:130] > # nri_disable_connections = false
	I0625 16:29:28.832768   54199 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0625 16:29:28.832775   54199 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0625 16:29:28.832780   54199 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0625 16:29:28.832787   54199 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0625 16:29:28.832793   54199 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0625 16:29:28.832798   54199 command_runner.go:130] > [crio.stats]
	I0625 16:29:28.832803   54199 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0625 16:29:28.832811   54199 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0625 16:29:28.832815   54199 command_runner.go:130] > # stats_collection_period = 0
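The TOML dumped above is CRI-O's effective configuration; only a handful of keys (pids_limit, drop_infra_ctr, pinns_path, the [crio.runtime.runtimes.runc] table, enable_metrics) are uncommented. Below is a minimal sketch of reading a few of the [crio.runtime] keys back out of that file, assuming the github.com/BurntSushi/toml package; it is purely illustrative and not part of minikube.

	// readcrioconf.go - sketch: read a couple of [crio.runtime] keys from the
	// CRI-O config dumped above. Assumes github.com/BurntSushi/toml; illustrative only.
	package main
	
	import (
		"fmt"
		"log"
	
		"github.com/BurntSushi/toml"
	)
	
	type crioConf struct {
		Crio struct {
			Runtime struct {
				PidsLimit      int64  `toml:"pids_limit"`
				DefaultRuntime string `toml:"default_runtime"`
				DropInfraCtr   bool   `toml:"drop_infra_ctr"`
			} `toml:"runtime"`
		} `toml:"crio"`
	}
	
	func main() {
		var conf crioConf
		// Keys that stay commented out in the file simply keep their zero values.
		if _, err := toml.DecodeFile("/etc/crio/crio.conf", &conf); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("pids_limit=%d default_runtime=%q drop_infra_ctr=%v\n",
			conf.Crio.Runtime.PidsLimit,
			conf.Crio.Runtime.DefaultRuntime,
			conf.Crio.Runtime.DropInfraCtr)
	}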
	I0625 16:29:28.832922   54199 cni.go:84] Creating CNI manager for ""
	I0625 16:29:28.832932   54199 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0625 16:29:28.832939   54199 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0625 16:29:28.832958   54199 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.231 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-552402 NodeName:multinode-552402 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.231 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0625 16:29:28.833084   54199 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.231
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-552402"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.231
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.231"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
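The YAML above is generated by minikube from a Go template (kubeadm.go:187 logs the rendered result) and then copied to /var/tmp/minikube/kubeadm.yaml.new, as the scp line below shows. The sketch that follows illustrates the general mechanism with text/template, using values taken from this log; it is not minikube's actual template.

	// kubeadmtmpl.go - illustrative sketch: render a small fragment of the kubeadm
	// config shown above from per-node parameters. Not minikube's real template.
	package main
	
	import (
		"os"
		"text/template"
	)
	
	const fragment = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	  taints: []
	`
	
	type nodeParams struct {
		AdvertiseAddress string
		APIServerPort    int
		NodeName         string
		NodeIP           string
	}
	
	func main() {
		tmpl := template.Must(template.New("kubeadm").Parse(fragment))
		// Values taken from the log above (multinode-552402 on 192.168.39.231:8443).
		p := nodeParams{
			AdvertiseAddress: "192.168.39.231",
			APIServerPort:    8443,
			NodeName:         "multinode-552402",
			NodeIP:           "192.168.39.231",
		}
		if err := tmpl.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}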
	I0625 16:29:28.833141   54199 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0625 16:29:28.844146   54199 command_runner.go:130] > kubeadm
	I0625 16:29:28.844160   54199 command_runner.go:130] > kubectl
	I0625 16:29:28.844164   54199 command_runner.go:130] > kubelet
	I0625 16:29:28.844214   54199 binaries.go:44] Found k8s binaries, skipping transfer
	I0625 16:29:28.844258   54199 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0625 16:29:28.854259   54199 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0625 16:29:28.870641   54199 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0625 16:29:28.886494   54199 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0625 16:29:28.902655   54199 ssh_runner.go:195] Run: grep 192.168.39.231	control-plane.minikube.internal$ /etc/hosts
	I0625 16:29:28.906953   54199 command_runner.go:130] > 192.168.39.231	control-plane.minikube.internal
	I0625 16:29:28.907032   54199 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 16:29:29.044061   54199 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0625 16:29:29.058261   54199 certs.go:68] Setting up /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/multinode-552402 for IP: 192.168.39.231
	I0625 16:29:29.058276   54199 certs.go:194] generating shared ca certs ...
	I0625 16:29:29.058296   54199 certs.go:226] acquiring lock for ca certs: {Name:mkac904b769881cd26c50f043dc80ff92937f71f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:29:29.058446   54199 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key
	I0625 16:29:29.058505   54199 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key
	I0625 16:29:29.058516   54199 certs.go:256] generating profile certs ...
	I0625 16:29:29.058592   54199 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/multinode-552402/client.key
	I0625 16:29:29.058647   54199 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/multinode-552402/apiserver.key.0cdd1bbb
	I0625 16:29:29.058688   54199 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/multinode-552402/proxy-client.key
	I0625 16:29:29.058698   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0625 16:29:29.058709   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0625 16:29:29.058722   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0625 16:29:29.058732   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0625 16:29:29.058741   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/multinode-552402/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0625 16:29:29.058752   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/multinode-552402/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0625 16:29:29.058764   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/multinode-552402/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0625 16:29:29.058772   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/multinode-552402/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0625 16:29:29.058822   54199 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem (1338 bytes)
	W0625 16:29:29.058847   54199 certs.go:480] ignoring /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239_empty.pem, impossibly tiny 0 bytes
	I0625 16:29:29.058858   54199 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem (1679 bytes)
	I0625 16:29:29.058879   54199 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem (1078 bytes)
	I0625 16:29:29.058901   54199 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem (1123 bytes)
	I0625 16:29:29.058921   54199 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem (1679 bytes)
	I0625 16:29:29.058996   54199 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem (1708 bytes)
	I0625 16:29:29.059027   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:29:29.059040   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem -> /usr/share/ca-certificates/21239.pem
	I0625 16:29:29.059049   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> /usr/share/ca-certificates/212392.pem
	I0625 16:29:29.059571   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0625 16:29:29.083401   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0625 16:29:29.106823   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0625 16:29:29.132199   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0625 16:29:29.155401   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/multinode-552402/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0625 16:29:29.178490   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/multinode-552402/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0625 16:29:29.201308   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/multinode-552402/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0625 16:29:29.225004   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/multinode-552402/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0625 16:29:29.248552   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0625 16:29:29.271690   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem --> /usr/share/ca-certificates/21239.pem (1338 bytes)
	I0625 16:29:29.295084   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem --> /usr/share/ca-certificates/212392.pem (1708 bytes)
	I0625 16:29:29.319053   54199 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0625 16:29:29.335076   54199 ssh_runner.go:195] Run: openssl version
	I0625 16:29:29.340692   54199 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0625 16:29:29.340759   54199 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21239.pem && ln -fs /usr/share/ca-certificates/21239.pem /etc/ssl/certs/21239.pem"
	I0625 16:29:29.351365   54199 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21239.pem
	I0625 16:29:29.355719   54199 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 25 15:51 /usr/share/ca-certificates/21239.pem
	I0625 16:29:29.355782   54199 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 25 15:51 /usr/share/ca-certificates/21239.pem
	I0625 16:29:29.355830   54199 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21239.pem
	I0625 16:29:29.361364   54199 command_runner.go:130] > 51391683
	I0625 16:29:29.361403   54199 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21239.pem /etc/ssl/certs/51391683.0"
	I0625 16:29:29.370379   54199 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/212392.pem && ln -fs /usr/share/ca-certificates/212392.pem /etc/ssl/certs/212392.pem"
	I0625 16:29:29.380666   54199 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/212392.pem
	I0625 16:29:29.385081   54199 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 25 15:51 /usr/share/ca-certificates/212392.pem
	I0625 16:29:29.385107   54199 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 25 15:51 /usr/share/ca-certificates/212392.pem
	I0625 16:29:29.385131   54199 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/212392.pem
	I0625 16:29:29.390645   54199 command_runner.go:130] > 3ec20f2e
	I0625 16:29:29.390686   54199 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/212392.pem /etc/ssl/certs/3ec20f2e.0"
	I0625 16:29:29.399808   54199 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0625 16:29:29.409958   54199 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:29:29.414415   54199 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 25 15:10 /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:29:29.414604   54199 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 25 15:10 /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:29:29.414637   54199 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:29:29.419943   54199 command_runner.go:130] > b5213941
	I0625 16:29:29.420141   54199 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0625 16:29:29.429026   54199 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0625 16:29:29.433375   54199 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0625 16:29:29.433395   54199 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0625 16:29:29.433402   54199 command_runner.go:130] > Device: 253,1	Inode: 1057301     Links: 1
	I0625 16:29:29.433412   54199 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0625 16:29:29.433421   54199 command_runner.go:130] > Access: 2024-06-25 16:23:19.759396760 +0000
	I0625 16:29:29.433433   54199 command_runner.go:130] > Modify: 2024-06-25 16:23:19.759396760 +0000
	I0625 16:29:29.433447   54199 command_runner.go:130] > Change: 2024-06-25 16:23:19.759396760 +0000
	I0625 16:29:29.433456   54199 command_runner.go:130] >  Birth: 2024-06-25 16:23:19.759396760 +0000
	I0625 16:29:29.433497   54199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0625 16:29:29.438822   54199 command_runner.go:130] > Certificate will not expire
	I0625 16:29:29.439030   54199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0625 16:29:29.444380   54199 command_runner.go:130] > Certificate will not expire
	I0625 16:29:29.444432   54199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0625 16:29:29.449637   54199 command_runner.go:130] > Certificate will not expire
	I0625 16:29:29.449779   54199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0625 16:29:29.455346   54199 command_runner.go:130] > Certificate will not expire
	I0625 16:29:29.455403   54199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0625 16:29:29.460604   54199 command_runner.go:130] > Certificate will not expire
	I0625 16:29:29.460804   54199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0625 16:29:29.466092   54199 command_runner.go:130] > Certificate will not expire
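	Each `openssl x509 -checkend 86400` run above asks whether the certificate expires within the next 24 hours. The following is a self-contained sketch of the same check using Go's crypto/x509; it is illustrative only (minikube shells out to openssl for this), and the path is simply one of the certificates checked above.

	// checkend.go - sketch of what `openssl x509 -checkend 86400` verifies:
	// does the certificate expire within the next 24 hours?
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)
	
	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("Certificate will expire")
			os.Exit(1)
		}
		fmt.Println("Certificate will not expire")
	}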
	I0625 16:29:29.466150   54199 kubeadm.go:391] StartCluster: {Name:multinode-552402 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:multinode-552402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.177 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
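The StartCluster line above is minikube's dump of the persisted profile configuration for multinode-552402: the kvm2 driver, 2200MB of memory, Kubernetes v1.30.2 on CRI-O, the three nodes (the control plane plus m02/m03), and the addon map. A hedged way to inspect the same data outside the logs, assuming the default profile layout on disk and that jq is available for pretty-printing:

    # Default on-disk location of the profile config (this job overrides
    # MINIKUBE_HOME, in which case the profiles directory lives under that
    # path instead); the jq filter only selects the fields shown in the log.
    jq '{Driver, Memory, Nodes, KubernetesConfig}' \
      "$HOME/.minikube/profiles/multinode-552402/config.json"

    # Or have the binary under test render every profile as JSON:
    out/minikube-linux-amd64 profile list --output json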
	I0625 16:29:29.466246   54199 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0625 16:29:29.466288   54199 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0625 16:29:29.502737   54199 command_runner.go:130] > d4f00ecc70fc073d3550f6c89dbb15c1b77b863e7713a761a495c0274be411af
	I0625 16:29:29.502761   54199 command_runner.go:130] > 9e3cf9de6e7ead6b52148dcd4955b58900a7d8518f1f51123b6e1e3d75fcc3e1
	I0625 16:29:29.502774   54199 command_runner.go:130] > dada3d77ec88472cd180091e075101888927a1b93a58d88bd7378fbe100d3045
	I0625 16:29:29.502784   54199 command_runner.go:130] > f74159477c5e02e00dfb27d653217dc9b2d7693cee6730c6af252cf01c5572db
	I0625 16:29:29.502795   54199 command_runner.go:130] > 56b7ee056128dc759220644aa7dc88d47b282cf6f68c6ce88244ec9bef2de09c
	I0625 16:29:29.502805   54199 command_runner.go:130] > 79cd6519b497f35ff1e9ac8c6377ada466699c880f80fd08e64500e8964072a8
	I0625 16:29:29.502817   54199 command_runner.go:130] > 74a9d37ff49363320821cbe35e106f17871f1049d961ffc41b0531aeccfc735f
	I0625 16:29:29.502830   54199 command_runner.go:130] > bd920691a329ba6c3778d2ce3bfd1a1d43b9b4ecd0e0ebe6a6dc63bdfbbe887d
	I0625 16:29:29.502852   54199 cri.go:89] found id: "d4f00ecc70fc073d3550f6c89dbb15c1b77b863e7713a761a495c0274be411af"
	I0625 16:29:29.502863   54199 cri.go:89] found id: "9e3cf9de6e7ead6b52148dcd4955b58900a7d8518f1f51123b6e1e3d75fcc3e1"
	I0625 16:29:29.502868   54199 cri.go:89] found id: "dada3d77ec88472cd180091e075101888927a1b93a58d88bd7378fbe100d3045"
	I0625 16:29:29.502877   54199 cri.go:89] found id: "f74159477c5e02e00dfb27d653217dc9b2d7693cee6730c6af252cf01c5572db"
	I0625 16:29:29.502881   54199 cri.go:89] found id: "56b7ee056128dc759220644aa7dc88d47b282cf6f68c6ce88244ec9bef2de09c"
	I0625 16:29:29.502886   54199 cri.go:89] found id: "79cd6519b497f35ff1e9ac8c6377ada466699c880f80fd08e64500e8964072a8"
	I0625 16:29:29.502890   54199 cri.go:89] found id: "74a9d37ff49363320821cbe35e106f17871f1049d961ffc41b0531aeccfc735f"
	I0625 16:29:29.502896   54199 cri.go:89] found id: "bd920691a329ba6c3778d2ce3bfd1a1d43b9b4ecd0e0ebe6a6dc63bdfbbe887d"
	I0625 16:29:29.502901   54199 cri.go:89] found id: ""
	I0625 16:29:29.502946   54199 ssh_runner.go:195] Run: sudo runc list -f json
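Before deciding how to restart the cluster, kubeadm.go enumerates the kube-system containers that CRI-O already knows about, which is why the eight IDs above appear twice: once as raw crictl output and once as parsed "found id" entries. The same two commands can be run by hand on the node; crictl is assumed to be configured for the CRI-O socket (as it is on the minikube guest), and the jq filter is only added for readability:

    # All kube-system containers, running or exited, by ID (matches the list above).
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

    # Cross-check against the low-level OCI runtime's view of the same containers.
    sudo runc list -f json | jq -r '.[].id'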
	
	
	==> CRI-O <==
	Jun 25 16:30:54 multinode-552402 crio[2864]: time="2024-06-25 16:30:54.732320977Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719333054732298298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98848ab0-db63-44fd-b090-cb805a2a4419 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:30:54 multinode-552402 crio[2864]: time="2024-06-25 16:30:54.732831849Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98abf234-c906-462e-bed1-45b00e8c0ab6 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:30:54 multinode-552402 crio[2864]: time="2024-06-25 16:30:54.732882893Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=98abf234-c906-462e-bed1-45b00e8c0ab6 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:30:54 multinode-552402 crio[2864]: time="2024-06-25 16:30:54.733303834Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94303a695fe4d81b2707a7de43cdc991378b8299206a0e1e25904e2f455cb8ab,PodSandboxId:dd6de279d05d1dba66fe6175dab37b54fea09d279e06975e7e5cca2e3ca47324,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1719333009536563017,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-97579,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d15691ff-e95d-426b-9545-344419479d75,},Annotations:map[string]string{io.kubernetes.container.hash: f6b99b44,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40d59741ee01b451097aa7966de7d23a2e74c39a2622e3cc802154ffc4dd4c53,PodSandboxId:4d3d31c83b9c757d945ce1f380567d2cb0c636493ba55e3ae8c045f93ac76ee5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1719332975911802419,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6ctrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de38f2c-e56d-43ca-acd6-537a2c8c36c9,},Annotations:map[string]string{io.kubernetes.container.hash: 44a71256,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8647b618ee7b9b7796293cfccaaa79f29452b9ad19f19bd4bf4f5371f911f3ad,PodSandboxId:bda3a47a30d8c1b6ae2548a0a982958dcfcad03512d970765af86cff6f824b35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719332975759627375,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nphd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea3247d1-08d6-4760-8ba1-62cd6d3b7edb,},Annotations:map[string]string{io.kubernetes.container.hash: 95851791,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca97a7bd7222a87e610f58087561023d776edcd8cbb43a5a5b9c57657b895ccf,PodSandboxId:c5027149117e2f151cd3d190cc9399c7c7b8c5d3af1865417001d03e9c5b028a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719332975729150194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jf2ds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3716b4c2-3417-4d41-8143-decc38ce93aa,},Annotations:map[string]string{io.kubernetes.container.hash: 140afee7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",
\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8462e7859192761c30c0ab03423aa6ffa0af7ab3f9a1b1ac724a99b2c73716b,PodSandboxId:a978ac88feb2ac6cc9734d24177b98dd5aefd1a45e60d6bb4aca9fe8ec6fc6ca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1719332975729949551,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638c610e-b5c1-40b3-8972-fbf36c6f1bf0,},Annotations:map[string]string{io.ku
bernetes.container.hash: aa604651,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3410e4d3e33976711626b2e78ed9f2c95d4fab7ae14ffb4db21293db4b1d5d00,PodSandboxId:15e75468ce45230526d0e92a918e1a217a5b2d1f8111666256f12218b2c3f769,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719332971961179305,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2626fd7f4632883b6375eadd6d8a3d1f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bf530af755b58043cd84f310c01986cfe5f2a354d4e6102e40d465ec3a96a81,PodSandboxId:bc9f7cf553f6aa5358b4ec70c5be99fd89a1e6145d4a0076995e42adb43ea697,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719332971906857100,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a56d62a295a356f75f3a9ab79148041,},Annotations:map[string]string{io.kubernetes.container.hash: 9e04
68f2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea4bf60afb2ebac2743296dbe43222df97b74f259f9aa5564423d6b35335f325,PodSandboxId:997e31d954726ed3eba59fdd19135300af4e25306f848e18746fb071a6134919,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719332971924247628,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afcefe2172ce48b51be458f8b4b4ec40,},Annotations:map[string]string{io.kubernetes.container.hash: 12806e87,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c383a4dc6b4ce55513c013b99411811ae775392a7c5c2ecd9c50299edf98bf,PodSandboxId:993e8b320da8aad2b7faf8f09b45956526e3c9cec836c71b3f757156675ff381,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719332971887729272,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57f573733165d81dfacbc3765903f40e,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26129c27a9df69b5bad2e9ad7b5b053e3daf66ccb1a2833c454b8b33c3901d8,PodSandboxId:9cf1c28407eedb9fe47ee75a4593d7653ba0012a2854cccf4619962ab2543533,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1719332672429667487,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-97579,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d15691ff-e95d-426b-9545-344419479d75,},Annotations:map[string]string{io.kubernetes.container.hash: f6b99b44,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4f00ecc70fc073d3550f6c89dbb15c1b77b863e7713a761a495c0274be411af,PodSandboxId:45dca2bbc9e761cebbeaf38b9b0f82b6802937057683876c4cd34dcf4658440d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1719332625591518494,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638c610e-b5c1-40b3-8972-fbf36c6f1bf0,},Annotations:map[string]string{io.kubernetes.container.hash: aa604651,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e3cf9de6e7ead6b52148dcd4955b58900a7d8518f1f51123b6e1e3d75fcc3e1,PodSandboxId:3461599c9ae5b8084dc3c9eae4f23cc1ab079ad7f03de781355e8d350fd7461b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719332624731045573,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jf2ds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3716b4c2-3417-4d41-8143-decc38ce93aa,},Annotations:map[string]string{io.kubernetes.container.hash: 140afee7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dada3d77ec88472cd180091e075101888927a1b93a58d88bd7378fbe100d3045,PodSandboxId:7ca324582eef881fb3ee2a303c68dafc8088ead0efee3c38ca177db602c9a6f3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1719332623000929635,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6ctrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2de38f2c-e56d-43ca-acd6-537a2c8c36c9,},Annotations:map[string]string{io.kubernetes.container.hash: 44a71256,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74159477c5e02e00dfb27d653217dc9b2d7693cee6730c6af252cf01c5572db,PodSandboxId:948aee8fb658d4e608304b1783868152c397c5980937eb797efaa066360d130e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1719332622671325347,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nphd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea3247d1-08d6-4760-8ba1-
62cd6d3b7edb,},Annotations:map[string]string{io.kubernetes.container.hash: 95851791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56b7ee056128dc759220644aa7dc88d47b282cf6f68c6ce88244ec9bef2de09c,PodSandboxId:8c5a93cba3030028a9fda40545ca2e8a936cc10e424196a543be22574fde5ec5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1719332603272043176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2626fd7f4632883b6375eadd6d8a3d1f,},
Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a9d37ff49363320821cbe35e106f17871f1049d961ffc41b0531aeccfc735f,PodSandboxId:08a5de5a0d950dd3b55524a12fd016dc0f5529ddd3b71786c7a561ba6c073767,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1719332603209821793,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afcefe2172ce48b51be458f8b4b4ec40,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 12806e87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79cd6519b497f35ff1e9ac8c6377ada466699c880f80fd08e64500e8964072a8,PodSandboxId:983a83971fdcd6758a676a322438c8b91d38d2bba42eee049e2f037f17b9b2e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1719332603220711629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57f573733165d81dfacbc3765903f40e,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd920691a329ba6c3778d2ce3bfd1a1d43b9b4ecd0e0ebe6a6dc63bdfbbe887d,PodSandboxId:f4a086dccd71fd3a824b232f8e9cb32d36de35cfc549217ff7057c61c47d9eed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1719332603171686785,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a56d62a295a356f75f3a9ab79148041,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 9e0468f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=98abf234-c906-462e-bed1-45b00e8c0ab6 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:30:54 multinode-552402 crio[2864]: time="2024-06-25 16:30:54.773881544Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a96d2f89-4553-407e-948e-4b4c357789b1 name=/runtime.v1.RuntimeService/Version
	Jun 25 16:30:54 multinode-552402 crio[2864]: time="2024-06-25 16:30:54.773953517Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a96d2f89-4553-407e-948e-4b4c357789b1 name=/runtime.v1.RuntimeService/Version
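The CRI-O journal above and below is debug-level output: each kubelet poll shows up as a Request/Response pair traced through the otel-collector interceptors, cycling through ListContainers, Version and ImageFsInfo. The same three RPCs can be issued manually with crictl to compare against what the kubelet sees; the explicit --runtime-endpoint flag is only needed when /etc/crictl.yaml does not already point at the CRI-O socket:

    # Same RPCs as in the journal entries, driven by hand.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a --output json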
	Jun 25 16:30:54 multinode-552402 crio[2864]: time="2024-06-25 16:30:54.774799621Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d9169300-cc17-48d1-a4d6-8df0e3ec7680 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:30:54 multinode-552402 crio[2864]: time="2024-06-25 16:30:54.775255213Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719333054775231667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9169300-cc17-48d1-a4d6-8df0e3ec7680 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:30:54 multinode-552402 crio[2864]: time="2024-06-25 16:30:54.775740196Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19409833-916e-4b87-bcf0-47979c2e6bab name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:30:54 multinode-552402 crio[2864]: time="2024-06-25 16:30:54.775797171Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19409833-916e-4b87-bcf0-47979c2e6bab name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:30:54 multinode-552402 crio[2864]: time="2024-06-25 16:30:54.776231816Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94303a695fe4d81b2707a7de43cdc991378b8299206a0e1e25904e2f455cb8ab,PodSandboxId:dd6de279d05d1dba66fe6175dab37b54fea09d279e06975e7e5cca2e3ca47324,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1719333009536563017,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-97579,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d15691ff-e95d-426b-9545-344419479d75,},Annotations:map[string]string{io.kubernetes.container.hash: f6b99b44,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40d59741ee01b451097aa7966de7d23a2e74c39a2622e3cc802154ffc4dd4c53,PodSandboxId:4d3d31c83b9c757d945ce1f380567d2cb0c636493ba55e3ae8c045f93ac76ee5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1719332975911802419,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6ctrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de38f2c-e56d-43ca-acd6-537a2c8c36c9,},Annotations:map[string]string{io.kubernetes.container.hash: 44a71256,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8647b618ee7b9b7796293cfccaaa79f29452b9ad19f19bd4bf4f5371f911f3ad,PodSandboxId:bda3a47a30d8c1b6ae2548a0a982958dcfcad03512d970765af86cff6f824b35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719332975759627375,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nphd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea3247d1-08d6-4760-8ba1-62cd6d3b7edb,},Annotations:map[string]string{io.kubernetes.container.hash: 95851791,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca97a7bd7222a87e610f58087561023d776edcd8cbb43a5a5b9c57657b895ccf,PodSandboxId:c5027149117e2f151cd3d190cc9399c7c7b8c5d3af1865417001d03e9c5b028a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719332975729150194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jf2ds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3716b4c2-3417-4d41-8143-decc38ce93aa,},Annotations:map[string]string{io.kubernetes.container.hash: 140afee7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",
\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8462e7859192761c30c0ab03423aa6ffa0af7ab3f9a1b1ac724a99b2c73716b,PodSandboxId:a978ac88feb2ac6cc9734d24177b98dd5aefd1a45e60d6bb4aca9fe8ec6fc6ca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1719332975729949551,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638c610e-b5c1-40b3-8972-fbf36c6f1bf0,},Annotations:map[string]string{io.ku
bernetes.container.hash: aa604651,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3410e4d3e33976711626b2e78ed9f2c95d4fab7ae14ffb4db21293db4b1d5d00,PodSandboxId:15e75468ce45230526d0e92a918e1a217a5b2d1f8111666256f12218b2c3f769,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719332971961179305,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2626fd7f4632883b6375eadd6d8a3d1f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bf530af755b58043cd84f310c01986cfe5f2a354d4e6102e40d465ec3a96a81,PodSandboxId:bc9f7cf553f6aa5358b4ec70c5be99fd89a1e6145d4a0076995e42adb43ea697,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719332971906857100,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a56d62a295a356f75f3a9ab79148041,},Annotations:map[string]string{io.kubernetes.container.hash: 9e04
68f2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea4bf60afb2ebac2743296dbe43222df97b74f259f9aa5564423d6b35335f325,PodSandboxId:997e31d954726ed3eba59fdd19135300af4e25306f848e18746fb071a6134919,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719332971924247628,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afcefe2172ce48b51be458f8b4b4ec40,},Annotations:map[string]string{io.kubernetes.container.hash: 12806e87,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c383a4dc6b4ce55513c013b99411811ae775392a7c5c2ecd9c50299edf98bf,PodSandboxId:993e8b320da8aad2b7faf8f09b45956526e3c9cec836c71b3f757156675ff381,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719332971887729272,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57f573733165d81dfacbc3765903f40e,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26129c27a9df69b5bad2e9ad7b5b053e3daf66ccb1a2833c454b8b33c3901d8,PodSandboxId:9cf1c28407eedb9fe47ee75a4593d7653ba0012a2854cccf4619962ab2543533,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1719332672429667487,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-97579,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d15691ff-e95d-426b-9545-344419479d75,},Annotations:map[string]string{io.kubernetes.container.hash: f6b99b44,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4f00ecc70fc073d3550f6c89dbb15c1b77b863e7713a761a495c0274be411af,PodSandboxId:45dca2bbc9e761cebbeaf38b9b0f82b6802937057683876c4cd34dcf4658440d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1719332625591518494,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638c610e-b5c1-40b3-8972-fbf36c6f1bf0,},Annotations:map[string]string{io.kubernetes.container.hash: aa604651,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e3cf9de6e7ead6b52148dcd4955b58900a7d8518f1f51123b6e1e3d75fcc3e1,PodSandboxId:3461599c9ae5b8084dc3c9eae4f23cc1ab079ad7f03de781355e8d350fd7461b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719332624731045573,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jf2ds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3716b4c2-3417-4d41-8143-decc38ce93aa,},Annotations:map[string]string{io.kubernetes.container.hash: 140afee7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dada3d77ec88472cd180091e075101888927a1b93a58d88bd7378fbe100d3045,PodSandboxId:7ca324582eef881fb3ee2a303c68dafc8088ead0efee3c38ca177db602c9a6f3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1719332623000929635,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6ctrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2de38f2c-e56d-43ca-acd6-537a2c8c36c9,},Annotations:map[string]string{io.kubernetes.container.hash: 44a71256,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74159477c5e02e00dfb27d653217dc9b2d7693cee6730c6af252cf01c5572db,PodSandboxId:948aee8fb658d4e608304b1783868152c397c5980937eb797efaa066360d130e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1719332622671325347,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nphd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea3247d1-08d6-4760-8ba1-
62cd6d3b7edb,},Annotations:map[string]string{io.kubernetes.container.hash: 95851791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56b7ee056128dc759220644aa7dc88d47b282cf6f68c6ce88244ec9bef2de09c,PodSandboxId:8c5a93cba3030028a9fda40545ca2e8a936cc10e424196a543be22574fde5ec5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1719332603272043176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2626fd7f4632883b6375eadd6d8a3d1f,},
Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a9d37ff49363320821cbe35e106f17871f1049d961ffc41b0531aeccfc735f,PodSandboxId:08a5de5a0d950dd3b55524a12fd016dc0f5529ddd3b71786c7a561ba6c073767,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1719332603209821793,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afcefe2172ce48b51be458f8b4b4ec40,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 12806e87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79cd6519b497f35ff1e9ac8c6377ada466699c880f80fd08e64500e8964072a8,PodSandboxId:983a83971fdcd6758a676a322438c8b91d38d2bba42eee049e2f037f17b9b2e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1719332603220711629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57f573733165d81dfacbc3765903f40e,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd920691a329ba6c3778d2ce3bfd1a1d43b9b4ecd0e0ebe6a6dc63bdfbbe887d,PodSandboxId:f4a086dccd71fd3a824b232f8e9cb32d36de35cfc549217ff7057c61c47d9eed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1719332603171686785,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a56d62a295a356f75f3a9ab79148041,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 9e0468f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=19409833-916e-4b87-bcf0-47979c2e6bab name=/runtime.v1.RuntimeService/ListContainers
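The full ListContainersResponse dump repeats with every poll, so the same set of containers is listed several times in this window. When reading the journal by hand it can help to pin down a single RPC by its interceptor id; a hedged example using the id from the response that ends just above (the --since timestamp is illustrative):

    # Narrow the CRI-O journal to one traced RPC by its interceptor id.
    sudo journalctl -u crio --since "2024-06-25 16:30:50" | grep 19409833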
	Jun 25 16:30:54 multinode-552402 crio[2864]: time="2024-06-25 16:30:54.817482068Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a7d4f345-cf00-4a80-8eec-93af17b365dc name=/runtime.v1.RuntimeService/Version
	Jun 25 16:30:54 multinode-552402 crio[2864]: time="2024-06-25 16:30:54.817556334Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a7d4f345-cf00-4a80-8eec-93af17b365dc name=/runtime.v1.RuntimeService/Version
	Jun 25 16:30:54 multinode-552402 crio[2864]: time="2024-06-25 16:30:54.818591701Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e510e9ae-42ca-466d-b107-1c2f38a182e6 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:30:54 multinode-552402 crio[2864]: time="2024-06-25 16:30:54.819410021Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719333054819384284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e510e9ae-42ca-466d-b107-1c2f38a182e6 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:30:54 multinode-552402 crio[2864]: time="2024-06-25 16:30:54.819862833Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=44bdb28e-8716-428d-a301-dbd5d6ebfa5f name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:30:54 multinode-552402 crio[2864]: time="2024-06-25 16:30:54.819912938Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=44bdb28e-8716-428d-a301-dbd5d6ebfa5f name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:30:54 multinode-552402 crio[2864]: time="2024-06-25 16:30:54.821262077Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94303a695fe4d81b2707a7de43cdc991378b8299206a0e1e25904e2f455cb8ab,PodSandboxId:dd6de279d05d1dba66fe6175dab37b54fea09d279e06975e7e5cca2e3ca47324,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1719333009536563017,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-97579,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d15691ff-e95d-426b-9545-344419479d75,},Annotations:map[string]string{io.kubernetes.container.hash: f6b99b44,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40d59741ee01b451097aa7966de7d23a2e74c39a2622e3cc802154ffc4dd4c53,PodSandboxId:4d3d31c83b9c757d945ce1f380567d2cb0c636493ba55e3ae8c045f93ac76ee5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1719332975911802419,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6ctrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de38f2c-e56d-43ca-acd6-537a2c8c36c9,},Annotations:map[string]string{io.kubernetes.container.hash: 44a71256,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8647b618ee7b9b7796293cfccaaa79f29452b9ad19f19bd4bf4f5371f911f3ad,PodSandboxId:bda3a47a30d8c1b6ae2548a0a982958dcfcad03512d970765af86cff6f824b35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719332975759627375,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nphd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea3247d1-08d6-4760-8ba1-62cd6d3b7edb,},Annotations:map[string]string{io.kubernetes.container.hash: 95851791,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca97a7bd7222a87e610f58087561023d776edcd8cbb43a5a5b9c57657b895ccf,PodSandboxId:c5027149117e2f151cd3d190cc9399c7c7b8c5d3af1865417001d03e9c5b028a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719332975729150194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jf2ds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3716b4c2-3417-4d41-8143-decc38ce93aa,},Annotations:map[string]string{io.kubernetes.container.hash: 140afee7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",
\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8462e7859192761c30c0ab03423aa6ffa0af7ab3f9a1b1ac724a99b2c73716b,PodSandboxId:a978ac88feb2ac6cc9734d24177b98dd5aefd1a45e60d6bb4aca9fe8ec6fc6ca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1719332975729949551,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638c610e-b5c1-40b3-8972-fbf36c6f1bf0,},Annotations:map[string]string{io.ku
bernetes.container.hash: aa604651,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3410e4d3e33976711626b2e78ed9f2c95d4fab7ae14ffb4db21293db4b1d5d00,PodSandboxId:15e75468ce45230526d0e92a918e1a217a5b2d1f8111666256f12218b2c3f769,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719332971961179305,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2626fd7f4632883b6375eadd6d8a3d1f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bf530af755b58043cd84f310c01986cfe5f2a354d4e6102e40d465ec3a96a81,PodSandboxId:bc9f7cf553f6aa5358b4ec70c5be99fd89a1e6145d4a0076995e42adb43ea697,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719332971906857100,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a56d62a295a356f75f3a9ab79148041,},Annotations:map[string]string{io.kubernetes.container.hash: 9e04
68f2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea4bf60afb2ebac2743296dbe43222df97b74f259f9aa5564423d6b35335f325,PodSandboxId:997e31d954726ed3eba59fdd19135300af4e25306f848e18746fb071a6134919,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719332971924247628,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afcefe2172ce48b51be458f8b4b4ec40,},Annotations:map[string]string{io.kubernetes.container.hash: 12806e87,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c383a4dc6b4ce55513c013b99411811ae775392a7c5c2ecd9c50299edf98bf,PodSandboxId:993e8b320da8aad2b7faf8f09b45956526e3c9cec836c71b3f757156675ff381,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719332971887729272,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57f573733165d81dfacbc3765903f40e,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26129c27a9df69b5bad2e9ad7b5b053e3daf66ccb1a2833c454b8b33c3901d8,PodSandboxId:9cf1c28407eedb9fe47ee75a4593d7653ba0012a2854cccf4619962ab2543533,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1719332672429667487,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-97579,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d15691ff-e95d-426b-9545-344419479d75,},Annotations:map[string]string{io.kubernetes.container.hash: f6b99b44,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4f00ecc70fc073d3550f6c89dbb15c1b77b863e7713a761a495c0274be411af,PodSandboxId:45dca2bbc9e761cebbeaf38b9b0f82b6802937057683876c4cd34dcf4658440d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1719332625591518494,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638c610e-b5c1-40b3-8972-fbf36c6f1bf0,},Annotations:map[string]string{io.kubernetes.container.hash: aa604651,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e3cf9de6e7ead6b52148dcd4955b58900a7d8518f1f51123b6e1e3d75fcc3e1,PodSandboxId:3461599c9ae5b8084dc3c9eae4f23cc1ab079ad7f03de781355e8d350fd7461b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719332624731045573,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jf2ds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3716b4c2-3417-4d41-8143-decc38ce93aa,},Annotations:map[string]string{io.kubernetes.container.hash: 140afee7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dada3d77ec88472cd180091e075101888927a1b93a58d88bd7378fbe100d3045,PodSandboxId:7ca324582eef881fb3ee2a303c68dafc8088ead0efee3c38ca177db602c9a6f3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1719332623000929635,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6ctrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2de38f2c-e56d-43ca-acd6-537a2c8c36c9,},Annotations:map[string]string{io.kubernetes.container.hash: 44a71256,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74159477c5e02e00dfb27d653217dc9b2d7693cee6730c6af252cf01c5572db,PodSandboxId:948aee8fb658d4e608304b1783868152c397c5980937eb797efaa066360d130e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1719332622671325347,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nphd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea3247d1-08d6-4760-8ba1-
62cd6d3b7edb,},Annotations:map[string]string{io.kubernetes.container.hash: 95851791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56b7ee056128dc759220644aa7dc88d47b282cf6f68c6ce88244ec9bef2de09c,PodSandboxId:8c5a93cba3030028a9fda40545ca2e8a936cc10e424196a543be22574fde5ec5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1719332603272043176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2626fd7f4632883b6375eadd6d8a3d1f,},
Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a9d37ff49363320821cbe35e106f17871f1049d961ffc41b0531aeccfc735f,PodSandboxId:08a5de5a0d950dd3b55524a12fd016dc0f5529ddd3b71786c7a561ba6c073767,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1719332603209821793,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afcefe2172ce48b51be458f8b4b4ec40,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 12806e87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79cd6519b497f35ff1e9ac8c6377ada466699c880f80fd08e64500e8964072a8,PodSandboxId:983a83971fdcd6758a676a322438c8b91d38d2bba42eee049e2f037f17b9b2e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1719332603220711629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57f573733165d81dfacbc3765903f40e,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd920691a329ba6c3778d2ce3bfd1a1d43b9b4ecd0e0ebe6a6dc63bdfbbe887d,PodSandboxId:f4a086dccd71fd3a824b232f8e9cb32d36de35cfc549217ff7057c61c47d9eed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1719332603171686785,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a56d62a295a356f75f3a9ab79148041,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 9e0468f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=44bdb28e-8716-428d-a301-dbd5d6ebfa5f name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:30:54 multinode-552402 crio[2864]: time="2024-06-25 16:30:54.865605313Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d2b10f88-770b-41df-83b8-ed9d69a83ec8 name=/runtime.v1.RuntimeService/Version
	Jun 25 16:30:54 multinode-552402 crio[2864]: time="2024-06-25 16:30:54.865675406Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d2b10f88-770b-41df-83b8-ed9d69a83ec8 name=/runtime.v1.RuntimeService/Version
	Jun 25 16:30:54 multinode-552402 crio[2864]: time="2024-06-25 16:30:54.866703226Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fd233a1a-e686-43a9-a925-2998474e1c41 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:30:54 multinode-552402 crio[2864]: time="2024-06-25 16:30:54.867435554Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719333054867412461,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd233a1a-e686-43a9-a925-2998474e1c41 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:30:54 multinode-552402 crio[2864]: time="2024-06-25 16:30:54.868064887Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ebe201a4-af0a-463d-a0b5-1d0474cff94f name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:30:54 multinode-552402 crio[2864]: time="2024-06-25 16:30:54.868140134Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ebe201a4-af0a-463d-a0b5-1d0474cff94f name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:30:54 multinode-552402 crio[2864]: time="2024-06-25 16:30:54.868525532Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94303a695fe4d81b2707a7de43cdc991378b8299206a0e1e25904e2f455cb8ab,PodSandboxId:dd6de279d05d1dba66fe6175dab37b54fea09d279e06975e7e5cca2e3ca47324,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1719333009536563017,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-97579,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d15691ff-e95d-426b-9545-344419479d75,},Annotations:map[string]string{io.kubernetes.container.hash: f6b99b44,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40d59741ee01b451097aa7966de7d23a2e74c39a2622e3cc802154ffc4dd4c53,PodSandboxId:4d3d31c83b9c757d945ce1f380567d2cb0c636493ba55e3ae8c045f93ac76ee5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1719332975911802419,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6ctrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de38f2c-e56d-43ca-acd6-537a2c8c36c9,},Annotations:map[string]string{io.kubernetes.container.hash: 44a71256,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8647b618ee7b9b7796293cfccaaa79f29452b9ad19f19bd4bf4f5371f911f3ad,PodSandboxId:bda3a47a30d8c1b6ae2548a0a982958dcfcad03512d970765af86cff6f824b35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719332975759627375,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nphd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea3247d1-08d6-4760-8ba1-62cd6d3b7edb,},Annotations:map[string]string{io.kubernetes.container.hash: 95851791,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca97a7bd7222a87e610f58087561023d776edcd8cbb43a5a5b9c57657b895ccf,PodSandboxId:c5027149117e2f151cd3d190cc9399c7c7b8c5d3af1865417001d03e9c5b028a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719332975729150194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jf2ds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3716b4c2-3417-4d41-8143-decc38ce93aa,},Annotations:map[string]string{io.kubernetes.container.hash: 140afee7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",
\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8462e7859192761c30c0ab03423aa6ffa0af7ab3f9a1b1ac724a99b2c73716b,PodSandboxId:a978ac88feb2ac6cc9734d24177b98dd5aefd1a45e60d6bb4aca9fe8ec6fc6ca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1719332975729949551,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638c610e-b5c1-40b3-8972-fbf36c6f1bf0,},Annotations:map[string]string{io.ku
bernetes.container.hash: aa604651,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3410e4d3e33976711626b2e78ed9f2c95d4fab7ae14ffb4db21293db4b1d5d00,PodSandboxId:15e75468ce45230526d0e92a918e1a217a5b2d1f8111666256f12218b2c3f769,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719332971961179305,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2626fd7f4632883b6375eadd6d8a3d1f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bf530af755b58043cd84f310c01986cfe5f2a354d4e6102e40d465ec3a96a81,PodSandboxId:bc9f7cf553f6aa5358b4ec70c5be99fd89a1e6145d4a0076995e42adb43ea697,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719332971906857100,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a56d62a295a356f75f3a9ab79148041,},Annotations:map[string]string{io.kubernetes.container.hash: 9e04
68f2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea4bf60afb2ebac2743296dbe43222df97b74f259f9aa5564423d6b35335f325,PodSandboxId:997e31d954726ed3eba59fdd19135300af4e25306f848e18746fb071a6134919,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719332971924247628,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afcefe2172ce48b51be458f8b4b4ec40,},Annotations:map[string]string{io.kubernetes.container.hash: 12806e87,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c383a4dc6b4ce55513c013b99411811ae775392a7c5c2ecd9c50299edf98bf,PodSandboxId:993e8b320da8aad2b7faf8f09b45956526e3c9cec836c71b3f757156675ff381,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719332971887729272,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57f573733165d81dfacbc3765903f40e,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26129c27a9df69b5bad2e9ad7b5b053e3daf66ccb1a2833c454b8b33c3901d8,PodSandboxId:9cf1c28407eedb9fe47ee75a4593d7653ba0012a2854cccf4619962ab2543533,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1719332672429667487,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-97579,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d15691ff-e95d-426b-9545-344419479d75,},Annotations:map[string]string{io.kubernetes.container.hash: f6b99b44,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4f00ecc70fc073d3550f6c89dbb15c1b77b863e7713a761a495c0274be411af,PodSandboxId:45dca2bbc9e761cebbeaf38b9b0f82b6802937057683876c4cd34dcf4658440d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1719332625591518494,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638c610e-b5c1-40b3-8972-fbf36c6f1bf0,},Annotations:map[string]string{io.kubernetes.container.hash: aa604651,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e3cf9de6e7ead6b52148dcd4955b58900a7d8518f1f51123b6e1e3d75fcc3e1,PodSandboxId:3461599c9ae5b8084dc3c9eae4f23cc1ab079ad7f03de781355e8d350fd7461b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719332624731045573,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jf2ds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3716b4c2-3417-4d41-8143-decc38ce93aa,},Annotations:map[string]string{io.kubernetes.container.hash: 140afee7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dada3d77ec88472cd180091e075101888927a1b93a58d88bd7378fbe100d3045,PodSandboxId:7ca324582eef881fb3ee2a303c68dafc8088ead0efee3c38ca177db602c9a6f3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1719332623000929635,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6ctrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2de38f2c-e56d-43ca-acd6-537a2c8c36c9,},Annotations:map[string]string{io.kubernetes.container.hash: 44a71256,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74159477c5e02e00dfb27d653217dc9b2d7693cee6730c6af252cf01c5572db,PodSandboxId:948aee8fb658d4e608304b1783868152c397c5980937eb797efaa066360d130e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1719332622671325347,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nphd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea3247d1-08d6-4760-8ba1-
62cd6d3b7edb,},Annotations:map[string]string{io.kubernetes.container.hash: 95851791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56b7ee056128dc759220644aa7dc88d47b282cf6f68c6ce88244ec9bef2de09c,PodSandboxId:8c5a93cba3030028a9fda40545ca2e8a936cc10e424196a543be22574fde5ec5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1719332603272043176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2626fd7f4632883b6375eadd6d8a3d1f,},
Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a9d37ff49363320821cbe35e106f17871f1049d961ffc41b0531aeccfc735f,PodSandboxId:08a5de5a0d950dd3b55524a12fd016dc0f5529ddd3b71786c7a561ba6c073767,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1719332603209821793,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afcefe2172ce48b51be458f8b4b4ec40,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 12806e87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79cd6519b497f35ff1e9ac8c6377ada466699c880f80fd08e64500e8964072a8,PodSandboxId:983a83971fdcd6758a676a322438c8b91d38d2bba42eee049e2f037f17b9b2e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1719332603220711629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57f573733165d81dfacbc3765903f40e,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd920691a329ba6c3778d2ce3bfd1a1d43b9b4ecd0e0ebe6a6dc63bdfbbe887d,PodSandboxId:f4a086dccd71fd3a824b232f8e9cb32d36de35cfc549217ff7057c61c47d9eed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1719332603171686785,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a56d62a295a356f75f3a9ab79148041,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 9e0468f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ebe201a4-af0a-463d-a0b5-1d0474cff94f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	94303a695fe4d       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      45 seconds ago       Running             busybox                   1                   dd6de279d05d1       busybox-fc5497c4f-97579
	40d59741ee01b       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      About a minute ago   Running             kindnet-cni               1                   4d3d31c83b9c7       kindnet-6ctrk
	8647b618ee7b9       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      About a minute ago   Running             kube-proxy                1                   bda3a47a30d8c       kube-proxy-nphd7
	a8462e7859192       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   a978ac88feb2a       storage-provisioner
	ca97a7bd7222a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   c5027149117e2       coredns-7db6d8ff4d-jf2ds
	3410e4d3e3397       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      About a minute ago   Running             kube-scheduler            1                   15e75468ce452       kube-scheduler-multinode-552402
	ea4bf60afb2eb       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   997e31d954726       etcd-multinode-552402
	9bf530af755b5       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      About a minute ago   Running             kube-apiserver            1                   bc9f7cf553f6a       kube-apiserver-multinode-552402
	90c383a4dc6b4       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      About a minute ago   Running             kube-controller-manager   1                   993e8b320da8a       kube-controller-manager-multinode-552402
	a26129c27a9df       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   9cf1c28407eed       busybox-fc5497c4f-97579
	d4f00ecc70fc0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   45dca2bbc9e76       storage-provisioner
	9e3cf9de6e7ea       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   3461599c9ae5b       coredns-7db6d8ff4d-jf2ds
	dada3d77ec884       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      7 minutes ago        Exited              kindnet-cni               0                   7ca324582eef8       kindnet-6ctrk
	f74159477c5e0       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      7 minutes ago        Exited              kube-proxy                0                   948aee8fb658d       kube-proxy-nphd7
	56b7ee056128d       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      7 minutes ago        Exited              kube-scheduler            0                   8c5a93cba3030       kube-scheduler-multinode-552402
	79cd6519b497f       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      7 minutes ago        Exited              kube-controller-manager   0                   983a83971fdcd       kube-controller-manager-multinode-552402
	74a9d37ff4936       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   08a5de5a0d950       etcd-multinode-552402
	bd920691a329b       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      7 minutes ago        Exited              kube-apiserver            0                   f4a086dccd71f       kube-apiserver-multinode-552402
	
	
	==> coredns [9e3cf9de6e7ead6b52148dcd4955b58900a7d8518f1f51123b6e1e3d75fcc3e1] <==
	[INFO] 10.244.0.3:41558 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001760917s
	[INFO] 10.244.0.3:36998 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000085187s
	[INFO] 10.244.0.3:38483 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006269s
	[INFO] 10.244.0.3:59326 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001054349s
	[INFO] 10.244.0.3:35112 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075641s
	[INFO] 10.244.0.3:56628 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00004065s
	[INFO] 10.244.0.3:54400 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090516s
	[INFO] 10.244.1.2:58618 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169496s
	[INFO] 10.244.1.2:40742 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148609s
	[INFO] 10.244.1.2:44795 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085688s
	[INFO] 10.244.1.2:58327 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176335s
	[INFO] 10.244.0.3:57422 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000095233s
	[INFO] 10.244.0.3:51491 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000435s
	[INFO] 10.244.0.3:57623 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000032653s
	[INFO] 10.244.0.3:37188 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031269s
	[INFO] 10.244.1.2:36422 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117611s
	[INFO] 10.244.1.2:36651 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000134098s
	[INFO] 10.244.1.2:58096 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000105309s
	[INFO] 10.244.1.2:42834 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000090153s
	[INFO] 10.244.0.3:51313 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116702s
	[INFO] 10.244.0.3:60500 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000127251s
	[INFO] 10.244.0.3:35244 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074486s
	[INFO] 10.244.0.3:56809 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000073025s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ca97a7bd7222a87e610f58087561023d776edcd8cbb43a5a5b9c57657b895ccf] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36762 - 20673 "HINFO IN 3927945517368221176.1344637245483628756. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026407803s
	
	
	==> describe nodes <==
	Name:               multinode-552402
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-552402
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b
	                    minikube.k8s.io/name=multinode-552402
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_25T16_23_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 25 Jun 2024 16:23:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-552402
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 25 Jun 2024 16:30:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 25 Jun 2024 16:29:35 +0000   Tue, 25 Jun 2024 16:23:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 25 Jun 2024 16:29:35 +0000   Tue, 25 Jun 2024 16:23:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 25 Jun 2024 16:29:35 +0000   Tue, 25 Jun 2024 16:23:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 25 Jun 2024 16:29:35 +0000   Tue, 25 Jun 2024 16:23:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.231
	  Hostname:    multinode-552402
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6025096499d94c939deba1c860e7c4b7
	  System UUID:                60250964-99d9-4c93-9deb-a1c860e7c4b7
	  Boot ID:                    108b3034-f86c-45ec-b474-7e364c281e50
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-97579                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 coredns-7db6d8ff4d-jf2ds                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m13s
	  kube-system                 etcd-multinode-552402                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m27s
	  kube-system                 kindnet-6ctrk                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m14s
	  kube-system                 kube-apiserver-multinode-552402             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m27s
	  kube-system                 kube-controller-manager-multinode-552402    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m27s
	  kube-system                 kube-proxy-nphd7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 kube-scheduler-multinode-552402             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m27s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m12s                  kube-proxy       
	  Normal  Starting                 79s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  7m33s (x8 over 7m33s)  kubelet          Node multinode-552402 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m33s (x8 over 7m33s)  kubelet          Node multinode-552402 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m33s (x7 over 7m33s)  kubelet          Node multinode-552402 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m27s                  kubelet          Node multinode-552402 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  7m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    7m27s                  kubelet          Node multinode-552402 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m27s                  kubelet          Node multinode-552402 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m27s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m14s                  node-controller  Node multinode-552402 event: Registered Node multinode-552402 in Controller
	  Normal  NodeReady                7m11s                  kubelet          Node multinode-552402 status is now: NodeReady
	  Normal  Starting                 84s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  84s (x8 over 84s)      kubelet          Node multinode-552402 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    84s (x8 over 84s)      kubelet          Node multinode-552402 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     84s (x7 over 84s)      kubelet          Node multinode-552402 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  84s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           68s                    node-controller  Node multinode-552402 event: Registered Node multinode-552402 in Controller
	
	
	Name:               multinode-552402-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-552402-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b
	                    minikube.k8s.io/name=multinode-552402
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_25T16_30_14_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 25 Jun 2024 16:30:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-552402-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 25 Jun 2024 16:30:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 25 Jun 2024 16:30:44 +0000   Tue, 25 Jun 2024 16:30:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 25 Jun 2024 16:30:44 +0000   Tue, 25 Jun 2024 16:30:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 25 Jun 2024 16:30:44 +0000   Tue, 25 Jun 2024 16:30:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 25 Jun 2024 16:30:44 +0000   Tue, 25 Jun 2024 16:30:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.166
	  Hostname:    multinode-552402-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8493e28109fb4b659052d6c402f92bc8
	  System UUID:                8493e281-09fb-4b65-9052-d6c402f92bc8
	  Boot ID:                    4fc2f8d8-df56-4daa-875b-0a9e67c6fe47
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vdl68    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kindnet-djmlv              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m38s
	  kube-system                 kube-proxy-774kb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m33s                  kube-proxy  
	  Normal  Starting                 36s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  6m38s (x2 over 6m38s)  kubelet     Node multinode-552402-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m38s (x2 over 6m38s)  kubelet     Node multinode-552402-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m38s (x2 over 6m38s)  kubelet     Node multinode-552402-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m38s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m29s                  kubelet     Node multinode-552402-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  41s (x2 over 41s)      kubelet     Node multinode-552402-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s (x2 over 41s)      kubelet     Node multinode-552402-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s (x2 over 41s)      kubelet     Node multinode-552402-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  41s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                32s                    kubelet     Node multinode-552402-m02 status is now: NodeReady
	
	
	Name:               multinode-552402-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-552402-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b
	                    minikube.k8s.io/name=multinode-552402
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_25T16_30_43_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 25 Jun 2024 16:30:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-552402-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 25 Jun 2024 16:30:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 25 Jun 2024 16:30:51 +0000   Tue, 25 Jun 2024 16:30:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 25 Jun 2024 16:30:51 +0000   Tue, 25 Jun 2024 16:30:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 25 Jun 2024 16:30:51 +0000   Tue, 25 Jun 2024 16:30:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 25 Jun 2024 16:30:51 +0000   Tue, 25 Jun 2024 16:30:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.177
	  Hostname:    multinode-552402-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a4b6b7e963c4489297578f3cbbd5baf7
	  System UUID:                a4b6b7e9-63c4-4892-9757-8f3cbbd5baf7
	  Boot ID:                    0748866d-2822-4f6e-be06-c153fa05333c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-h2txx       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m55s
	  kube-system                 kube-proxy-pr9ph    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m10s                  kube-proxy       
	  Normal  Starting                 5m50s                  kube-proxy       
	  Normal  Starting                 7s                     kube-proxy       
	  Normal  NodeHasSufficientMemory  5m55s (x2 over 5m55s)  kubelet          Node multinode-552402-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m55s (x2 over 5m55s)  kubelet          Node multinode-552402-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m55s (x2 over 5m55s)  kubelet          Node multinode-552402-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m45s                  kubelet          Node multinode-552402-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m16s (x2 over 5m16s)  kubelet          Node multinode-552402-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m16s (x2 over 5m16s)  kubelet          Node multinode-552402-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m16s (x2 over 5m16s)  kubelet          Node multinode-552402-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m7s                   kubelet          Node multinode-552402-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  12s (x2 over 12s)      kubelet          Node multinode-552402-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x2 over 12s)      kubelet          Node multinode-552402-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x2 over 12s)      kubelet          Node multinode-552402-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                     node-controller  Node multinode-552402-m03 event: Registered Node multinode-552402-m03 in Controller
	  Normal  NodeReady                4s                     kubelet          Node multinode-552402-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[ +10.618814] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.056074] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075403] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.192282] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.120152] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.267734] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.040799] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +3.984264] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.061794] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.990240] systemd-fstab-generator[1277]: Ignoring "noauto" option for root device
	[  +0.074452] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.414532] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.349588] systemd-fstab-generator[1567]: Ignoring "noauto" option for root device
	[Jun25 16:24] kauditd_printk_skb: 84 callbacks suppressed
	[Jun25 16:29] systemd-fstab-generator[2778]: Ignoring "noauto" option for root device
	[  +0.152235] systemd-fstab-generator[2790]: Ignoring "noauto" option for root device
	[  +0.159576] systemd-fstab-generator[2804]: Ignoring "noauto" option for root device
	[  +0.144894] systemd-fstab-generator[2816]: Ignoring "noauto" option for root device
	[  +0.273293] systemd-fstab-generator[2844]: Ignoring "noauto" option for root device
	[  +2.513212] systemd-fstab-generator[2948]: Ignoring "noauto" option for root device
	[  +2.017196] systemd-fstab-generator[3072]: Ignoring "noauto" option for root device
	[  +0.078915] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.019523] kauditd_printk_skb: 87 callbacks suppressed
	[ +13.511272] systemd-fstab-generator[3883]: Ignoring "noauto" option for root device
	[Jun25 16:30] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [74a9d37ff49363320821cbe35e106f17871f1049d961ffc41b0531aeccfc735f] <==
	{"level":"info","ts":"2024-06-25T16:24:17.616265Z","caller":"traceutil/trace.go:171","msg":"trace[912443765] range","detail":"{range_begin:/registry/minions/multinode-552402-m02; range_end:; response_count:0; response_revision:487; }","duration":"247.931513ms","start":"2024-06-25T16:24:17.368326Z","end":"2024-06-25T16:24:17.616258Z","steps":["trace[912443765] 'agreement among raft nodes before linearized reading'  (duration: 247.790258ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-25T16:24:17.618213Z","caller":"traceutil/trace.go:171","msg":"trace[2080603250] transaction","detail":"{read_only:false; response_revision:488; number_of_response:1; }","duration":"199.44407ms","start":"2024-06-25T16:24:17.418759Z","end":"2024-06-25T16:24:17.618203Z","steps":["trace[2080603250] 'process raft request'  (duration: 199.270329ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-25T16:25:00.434392Z","caller":"traceutil/trace.go:171","msg":"trace[1383212026] transaction","detail":"{read_only:false; response_revision:613; number_of_response:1; }","duration":"240.395203ms","start":"2024-06-25T16:25:00.193944Z","end":"2024-06-25T16:25:00.434339Z","steps":["trace[1383212026] 'process raft request'  (duration: 239.171465ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-25T16:25:00.434633Z","caller":"traceutil/trace.go:171","msg":"trace[1746066410] linearizableReadLoop","detail":"{readStateIndex:645; appliedIndex:643; }","duration":"143.774553ms","start":"2024-06-25T16:25:00.290832Z","end":"2024-06-25T16:25:00.434607Z","steps":["trace[1746066410] 'read index received'  (duration: 142.291698ms)","trace[1746066410] 'applied index is now lower than readState.Index'  (duration: 1.482442ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-25T16:25:00.434845Z","caller":"traceutil/trace.go:171","msg":"trace[1541619421] transaction","detail":"{read_only:false; response_revision:614; number_of_response:1; }","duration":"173.795813ms","start":"2024-06-25T16:25:00.26104Z","end":"2024-06-25T16:25:00.434836Z","steps":["trace[1541619421] 'process raft request'  (duration: 173.513905ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-25T16:25:00.435134Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.259219ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-25T16:25:00.435196Z","caller":"traceutil/trace.go:171","msg":"trace[1513403294] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:614; }","duration":"144.374051ms","start":"2024-06-25T16:25:00.290811Z","end":"2024-06-25T16:25:00.435185Z","steps":["trace[1513403294] 'agreement among raft nodes before linearized reading'  (duration: 144.240839ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-25T16:25:00.435133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.282524ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-552402-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-06-25T16:25:00.435341Z","caller":"traceutil/trace.go:171","msg":"trace[1022535190] range","detail":"{range_begin:/registry/minions/multinode-552402-m03; range_end:; response_count:1; response_revision:614; }","duration":"110.527043ms","start":"2024-06-25T16:25:00.324804Z","end":"2024-06-25T16:25:00.435332Z","steps":["trace[1022535190] 'agreement among raft nodes before linearized reading'  (duration: 110.260786ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-25T16:25:10.914592Z","caller":"traceutil/trace.go:171","msg":"trace[1545405513] transaction","detail":"{read_only:false; response_revision:666; number_of_response:1; }","duration":"112.790632ms","start":"2024-06-25T16:25:10.801785Z","end":"2024-06-25T16:25:10.914576Z","steps":["trace[1545405513] 'process raft request'  (duration: 112.705959ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-25T16:25:11.098219Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.430779ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3062606420781749424 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-552402\" mod_revision:635 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-552402\" value_size:496 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-552402\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-25T16:25:11.098353Z","caller":"traceutil/trace.go:171","msg":"trace[1466286931] linearizableReadLoop","detail":"{readStateIndex:704; appliedIndex:703; }","duration":"223.490887ms","start":"2024-06-25T16:25:10.874852Z","end":"2024-06-25T16:25:11.098343Z","steps":["trace[1466286931] 'read index received'  (duration: 40.200697ms)","trace[1466286931] 'applied index is now lower than readState.Index'  (duration: 183.289084ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-25T16:25:11.098401Z","caller":"traceutil/trace.go:171","msg":"trace[1623922951] transaction","detail":"{read_only:false; response_revision:667; number_of_response:1; }","duration":"291.279186ms","start":"2024-06-25T16:25:10.807106Z","end":"2024-06-25T16:25:11.098385Z","steps":["trace[1623922951] 'process raft request'  (duration: 161.282799ms)","trace[1623922951] 'compare'  (duration: 129.370345ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-25T16:25:11.09852Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.661211ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-552402-m02\" ","response":"range_response_count:1 size:3935"}
	{"level":"info","ts":"2024-06-25T16:25:11.098559Z","caller":"traceutil/trace.go:171","msg":"trace[813164059] range","detail":"{range_begin:/registry/minions/multinode-552402-m02; range_end:; response_count:1; response_revision:667; }","duration":"223.723279ms","start":"2024-06-25T16:25:10.874829Z","end":"2024-06-25T16:25:11.098552Z","steps":["trace[813164059] 'agreement among raft nodes before linearized reading'  (duration: 223.56643ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-25T16:27:54.400673Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-06-25T16:27:54.400797Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-552402","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.231:2380"],"advertise-client-urls":["https://192.168.39.231:2379"]}
	{"level":"warn","ts":"2024-06-25T16:27:54.400887Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.231:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-25T16:27:54.400923Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.231:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-25T16:27:54.40106Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-25T16:27:54.401124Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-25T16:27:54.435399Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6a82bbfd8eee2a80","current-leader-member-id":"6a82bbfd8eee2a80"}
	{"level":"info","ts":"2024-06-25T16:27:54.44147Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.231:2380"}
	{"level":"info","ts":"2024-06-25T16:27:54.44158Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.231:2380"}
	{"level":"info","ts":"2024-06-25T16:27:54.441592Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-552402","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.231:2380"],"advertise-client-urls":["https://192.168.39.231:2379"]}
	
	
	==> etcd [ea4bf60afb2ebac2743296dbe43222df97b74f259f9aa5564423d6b35335f325] <==
	{"level":"info","ts":"2024-06-25T16:29:32.383313Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-25T16:29:32.383365Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-25T16:29:32.383381Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-25T16:29:32.383724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a82bbfd8eee2a80 switched to configuration voters=(7674903412691839616)"}
	{"level":"info","ts":"2024-06-25T16:29:32.383798Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1a20717615099fdd","local-member-id":"6a82bbfd8eee2a80","added-peer-id":"6a82bbfd8eee2a80","added-peer-peer-urls":["https://192.168.39.231:2380"]}
	{"level":"info","ts":"2024-06-25T16:29:32.384019Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1a20717615099fdd","local-member-id":"6a82bbfd8eee2a80","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-25T16:29:32.384061Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-25T16:29:32.390025Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.231:2380"}
	{"level":"info","ts":"2024-06-25T16:29:32.39006Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.231:2380"}
	{"level":"info","ts":"2024-06-25T16:29:32.390278Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6a82bbfd8eee2a80","initial-advertise-peer-urls":["https://192.168.39.231:2380"],"listen-peer-urls":["https://192.168.39.231:2380"],"advertise-client-urls":["https://192.168.39.231:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.231:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-25T16:29:32.390326Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-25T16:29:33.724862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a82bbfd8eee2a80 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-25T16:29:33.724922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a82bbfd8eee2a80 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-25T16:29:33.725023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a82bbfd8eee2a80 received MsgPreVoteResp from 6a82bbfd8eee2a80 at term 2"}
	{"level":"info","ts":"2024-06-25T16:29:33.725042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a82bbfd8eee2a80 became candidate at term 3"}
	{"level":"info","ts":"2024-06-25T16:29:33.725047Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a82bbfd8eee2a80 received MsgVoteResp from 6a82bbfd8eee2a80 at term 3"}
	{"level":"info","ts":"2024-06-25T16:29:33.725056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a82bbfd8eee2a80 became leader at term 3"}
	{"level":"info","ts":"2024-06-25T16:29:33.725067Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6a82bbfd8eee2a80 elected leader 6a82bbfd8eee2a80 at term 3"}
	{"level":"info","ts":"2024-06-25T16:29:33.729471Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"6a82bbfd8eee2a80","local-member-attributes":"{Name:multinode-552402 ClientURLs:[https://192.168.39.231:2379]}","request-path":"/0/members/6a82bbfd8eee2a80/attributes","cluster-id":"1a20717615099fdd","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-25T16:29:33.72964Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-25T16:29:33.729659Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-25T16:29:33.729874Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-25T16:29:33.729905Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-25T16:29:33.731862Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.231:2379"}
	{"level":"info","ts":"2024-06-25T16:29:33.733489Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 16:30:55 up 8 min,  0 users,  load average: 0.90, 0.38, 0.15
	Linux multinode-552402 5.10.207 #1 SMP Mon Jun 24 21:03:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [40d59741ee01b451097aa7966de7d23a2e74c39a2622e3cc802154ffc4dd4c53] <==
	I0625 16:30:06.686260       1 main.go:250] Node multinode-552402-m03 has CIDR [10.244.3.0/24] 
	I0625 16:30:16.699845       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0625 16:30:16.699898       1 main.go:227] handling current node
	I0625 16:30:16.699919       1 main.go:223] Handling node with IPs: map[192.168.39.166:{}]
	I0625 16:30:16.699926       1 main.go:250] Node multinode-552402-m02 has CIDR [10.244.1.0/24] 
	I0625 16:30:16.700130       1 main.go:223] Handling node with IPs: map[192.168.39.177:{}]
	I0625 16:30:16.700166       1 main.go:250] Node multinode-552402-m03 has CIDR [10.244.3.0/24] 
	I0625 16:30:26.705316       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0625 16:30:26.705352       1 main.go:227] handling current node
	I0625 16:30:26.705365       1 main.go:223] Handling node with IPs: map[192.168.39.166:{}]
	I0625 16:30:26.705371       1 main.go:250] Node multinode-552402-m02 has CIDR [10.244.1.0/24] 
	I0625 16:30:26.705474       1 main.go:223] Handling node with IPs: map[192.168.39.177:{}]
	I0625 16:30:26.705500       1 main.go:250] Node multinode-552402-m03 has CIDR [10.244.3.0/24] 
	I0625 16:30:36.710539       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0625 16:30:36.710603       1 main.go:227] handling current node
	I0625 16:30:36.710621       1 main.go:223] Handling node with IPs: map[192.168.39.166:{}]
	I0625 16:30:36.710626       1 main.go:250] Node multinode-552402-m02 has CIDR [10.244.1.0/24] 
	I0625 16:30:36.710812       1 main.go:223] Handling node with IPs: map[192.168.39.177:{}]
	I0625 16:30:36.710837       1 main.go:250] Node multinode-552402-m03 has CIDR [10.244.3.0/24] 
	I0625 16:30:46.720716       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0625 16:30:46.720822       1 main.go:227] handling current node
	I0625 16:30:46.720866       1 main.go:223] Handling node with IPs: map[192.168.39.166:{}]
	I0625 16:30:46.720895       1 main.go:250] Node multinode-552402-m02 has CIDR [10.244.1.0/24] 
	I0625 16:30:46.721131       1 main.go:223] Handling node with IPs: map[192.168.39.177:{}]
	I0625 16:30:46.721191       1 main.go:250] Node multinode-552402-m03 has CIDR [10.244.2.0/24] 
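	
	kindnet is mirroring the per-node PodCIDR allocations handed out by the node-ipam controller (visible in the kube-controller-manager log further down, where multinode-552402-m03 is re-allocated 10.244.2.0/24). For illustration, a minimal client-go sketch, with the same KUBECONFIG assumption as above, that prints the allocation each node currently carries:
	
	// sketch: list each node's PodCIDR allocation
	package main
	
	import (
		"context"
		"fmt"
		"os"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG")) // assumption: kubeconfig via env
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Printf("%-28s podCIDRs=%v\n", n.Name, n.Spec.PodCIDRs)
		}
	}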
	
	
	==> kindnet [dada3d77ec88472cd180091e075101888927a1b93a58d88bd7378fbe100d3045] <==
	I0625 16:27:13.955886       1 main.go:250] Node multinode-552402-m03 has CIDR [10.244.3.0/24] 
	I0625 16:27:23.963476       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0625 16:27:23.963564       1 main.go:227] handling current node
	I0625 16:27:23.963588       1 main.go:223] Handling node with IPs: map[192.168.39.166:{}]
	I0625 16:27:23.963604       1 main.go:250] Node multinode-552402-m02 has CIDR [10.244.1.0/24] 
	I0625 16:27:23.963720       1 main.go:223] Handling node with IPs: map[192.168.39.177:{}]
	I0625 16:27:23.963744       1 main.go:250] Node multinode-552402-m03 has CIDR [10.244.3.0/24] 
	I0625 16:27:33.976024       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0625 16:27:33.976214       1 main.go:227] handling current node
	I0625 16:27:33.976260       1 main.go:223] Handling node with IPs: map[192.168.39.166:{}]
	I0625 16:27:33.976279       1 main.go:250] Node multinode-552402-m02 has CIDR [10.244.1.0/24] 
	I0625 16:27:33.976402       1 main.go:223] Handling node with IPs: map[192.168.39.177:{}]
	I0625 16:27:33.976457       1 main.go:250] Node multinode-552402-m03 has CIDR [10.244.3.0/24] 
	I0625 16:27:43.988705       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0625 16:27:43.988807       1 main.go:227] handling current node
	I0625 16:27:43.988836       1 main.go:223] Handling node with IPs: map[192.168.39.166:{}]
	I0625 16:27:43.988853       1 main.go:250] Node multinode-552402-m02 has CIDR [10.244.1.0/24] 
	I0625 16:27:43.989038       1 main.go:223] Handling node with IPs: map[192.168.39.177:{}]
	I0625 16:27:43.989084       1 main.go:250] Node multinode-552402-m03 has CIDR [10.244.3.0/24] 
	I0625 16:27:54.002507       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0625 16:27:54.002543       1 main.go:227] handling current node
	I0625 16:27:54.002559       1 main.go:223] Handling node with IPs: map[192.168.39.166:{}]
	I0625 16:27:54.002564       1 main.go:250] Node multinode-552402-m02 has CIDR [10.244.1.0/24] 
	I0625 16:27:54.002656       1 main.go:223] Handling node with IPs: map[192.168.39.177:{}]
	I0625 16:27:54.002661       1 main.go:250] Node multinode-552402-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [9bf530af755b58043cd84f310c01986cfe5f2a354d4e6102e40d465ec3a96a81] <==
	I0625 16:29:35.018767       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0625 16:29:35.020931       1 aggregator.go:165] initial CRD sync complete...
	I0625 16:29:35.021017       1 autoregister_controller.go:141] Starting autoregister controller
	I0625 16:29:35.021029       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0625 16:29:35.021035       1 cache.go:39] Caches are synced for autoregister controller
	I0625 16:29:35.061038       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0625 16:29:35.061073       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0625 16:29:35.061407       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0625 16:29:35.065081       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0625 16:29:35.065132       1 policy_source.go:224] refreshing policies
	I0625 16:29:35.065829       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0625 16:29:35.073580       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0625 16:29:35.074021       1 shared_informer.go:320] Caches are synced for configmaps
	I0625 16:29:35.089457       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0625 16:29:35.121940       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0625 16:29:35.146950       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0625 16:29:35.164894       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0625 16:29:35.975895       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0625 16:29:36.774791       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0625 16:29:36.893564       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0625 16:29:36.907475       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0625 16:29:36.964669       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0625 16:29:36.970780       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0625 16:29:47.902120       1 controller.go:615] quota admission added evaluator for: endpoints
	I0625 16:29:47.955810       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [bd920691a329ba6c3778d2ce3bfd1a1d43b9b4ecd0e0ebe6a6dc63bdfbbe887d] <==
	W0625 16:27:54.430415       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.430488       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.430519       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.430551       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.430576       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.430627       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.430652       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431180       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431481       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431523       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431547       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431575       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431587       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431604       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431629       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431655       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431657       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431686       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431701       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431714       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431729       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431746       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431758       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431781       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431794       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [79cd6519b497f35ff1e9ac8c6377ada466699c880f80fd08e64500e8964072a8] <==
	I0625 16:23:46.480070       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0625 16:24:17.626020       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-552402-m02\" does not exist"
	I0625 16:24:17.653783       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-552402-m02" podCIDRs=["10.244.1.0/24"]
	I0625 16:24:21.486130       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-552402-m02"
	I0625 16:24:26.877681       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-552402-m02"
	I0625 16:24:29.419762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.05429ms"
	I0625 16:24:29.428350       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.26433ms"
	I0625 16:24:29.428627       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.505µs"
	I0625 16:24:32.663124       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.68071ms"
	I0625 16:24:32.663213       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.295µs"
	I0625 16:24:33.428241       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.19328ms"
	I0625 16:24:33.428337       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.288µs"
	I0625 16:25:00.439645       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-552402-m03\" does not exist"
	I0625 16:25:00.439867       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-552402-m02"
	I0625 16:25:00.449284       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-552402-m03" podCIDRs=["10.244.2.0/24"]
	I0625 16:25:01.505149       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-552402-m03"
	I0625 16:25:10.262164       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-552402-m02"
	I0625 16:25:39.022904       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-552402-m02"
	I0625 16:25:39.958891       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-552402-m03\" does not exist"
	I0625 16:25:39.959451       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-552402-m02"
	I0625 16:25:39.970211       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-552402-m03" podCIDRs=["10.244.3.0/24"]
	I0625 16:25:48.885683       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-552402-m02"
	I0625 16:26:31.555902       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-552402-m02"
	I0625 16:26:31.621517       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.339401ms"
	I0625 16:26:31.622860       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.066µs"
	
	
	==> kube-controller-manager [90c383a4dc6b4ce55513c013b99411811ae775392a7c5c2ecd9c50299edf98bf] <==
	I0625 16:29:48.298318       1 shared_informer.go:320] Caches are synced for garbage collector
	I0625 16:29:48.300634       1 shared_informer.go:320] Caches are synced for garbage collector
	I0625 16:29:48.300684       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0625 16:30:09.707802       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.035089ms"
	I0625 16:30:09.717884       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.954889ms"
	I0625 16:30:09.718466       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="113.307µs"
	I0625 16:30:14.175865       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-552402-m02\" does not exist"
	I0625 16:30:14.180866       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-552402-m02" podCIDRs=["10.244.1.0/24"]
	I0625 16:30:16.055779       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.071µs"
	I0625 16:30:16.092618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.188µs"
	I0625 16:30:16.104077       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.551µs"
	I0625 16:30:16.112620       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.199µs"
	I0625 16:30:16.120390       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.087µs"
	I0625 16:30:16.123637       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.565µs"
	I0625 16:30:18.042106       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.698µs"
	I0625 16:30:23.123509       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-552402-m02"
	I0625 16:30:23.139670       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.444µs"
	I0625 16:30:23.156326       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.464µs"
	I0625 16:30:26.382716       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.080468ms"
	I0625 16:30:26.383688       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.892µs"
	I0625 16:30:41.689541       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-552402-m02"
	I0625 16:30:43.155783       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-552402-m02"
	I0625 16:30:43.156011       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-552402-m03\" does not exist"
	I0625 16:30:43.168867       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-552402-m03" podCIDRs=["10.244.2.0/24"]
	I0625 16:30:52.002473       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-552402-m02"
	
	
	==> kube-proxy [8647b618ee7b9b7796293cfccaaa79f29452b9ad19f19bd4bf4f5371f911f3ad] <==
	I0625 16:29:36.066785       1 server_linux.go:69] "Using iptables proxy"
	I0625 16:29:36.078098       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.231"]
	I0625 16:29:36.129029       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0625 16:29:36.129083       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0625 16:29:36.129099       1 server_linux.go:165] "Using iptables Proxier"
	I0625 16:29:36.133564       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0625 16:29:36.133837       1 server.go:872] "Version info" version="v1.30.2"
	I0625 16:29:36.133865       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0625 16:29:36.136099       1 config.go:192] "Starting service config controller"
	I0625 16:29:36.136388       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0625 16:29:36.137069       1 config.go:101] "Starting endpoint slice config controller"
	I0625 16:29:36.137183       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0625 16:29:36.139928       1 config.go:319] "Starting node config controller"
	I0625 16:29:36.140058       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0625 16:29:36.237864       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0625 16:29:36.238069       1 shared_informer.go:320] Caches are synced for service config
	I0625 16:29:36.241789       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f74159477c5e02e00dfb27d653217dc9b2d7693cee6730c6af252cf01c5572db] <==
	I0625 16:23:43.026302       1 server_linux.go:69] "Using iptables proxy"
	I0625 16:23:43.038183       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.231"]
	I0625 16:23:43.134484       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0625 16:23:43.134549       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0625 16:23:43.134570       1 server_linux.go:165] "Using iptables Proxier"
	I0625 16:23:43.145931       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0625 16:23:43.146206       1 server.go:872] "Version info" version="v1.30.2"
	I0625 16:23:43.146234       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0625 16:23:43.148296       1 config.go:192] "Starting service config controller"
	I0625 16:23:43.148328       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0625 16:23:43.148381       1 config.go:101] "Starting endpoint slice config controller"
	I0625 16:23:43.148386       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0625 16:23:43.149027       1 config.go:319] "Starting node config controller"
	I0625 16:23:43.149053       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0625 16:23:43.248424       1 shared_informer.go:320] Caches are synced for service config
	I0625 16:23:43.248456       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0625 16:23:43.249102       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3410e4d3e33976711626b2e78ed9f2c95d4fab7ae14ffb4db21293db4b1d5d00] <==
	W0625 16:29:35.055856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0625 16:29:35.055923       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0625 16:29:35.056052       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0625 16:29:35.056158       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0625 16:29:35.056273       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0625 16:29:35.056359       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0625 16:29:35.056492       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0625 16:29:35.056590       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0625 16:29:35.056724       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0625 16:29:35.056812       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0625 16:29:35.056926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0625 16:29:35.057129       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0625 16:29:35.057159       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0625 16:29:35.057233       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0625 16:29:35.059194       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0625 16:29:35.059289       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0625 16:29:35.059416       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0625 16:29:35.059513       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0625 16:29:35.059639       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0625 16:29:35.059737       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0625 16:29:35.059861       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0625 16:29:35.059929       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0625 16:29:35.060144       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0625 16:29:35.060231       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0625 16:29:36.542191       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
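	
	The forbidden list errors at the top of this scheduler log appear only during start-up, before the caches-synced line at 16:29:36, while the apiserver is still serving the scheduler's RBAC bindings. As a hedged illustration (same KUBECONFIG assumption as the earlier sketches; creating access reviews requires admin credentials), the corresponding permission can be checked explicitly with a SubjectAccessReview:
	
	// sketch: ask the apiserver whether system:kube-scheduler may list csinodes,
	// matching one of the forbidden errors in the log above
	package main
	
	import (
		"context"
		"fmt"
		"os"
	
		authorizationv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		sar := &authorizationv1.SubjectAccessReview{
			Spec: authorizationv1.SubjectAccessReviewSpec{
				User: "system:kube-scheduler",
				ResourceAttributes: &authorizationv1.ResourceAttributes{
					Verb:     "list",
					Group:    "storage.k8s.io",
					Resource: "csinodes",
				},
			},
		}
		res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(context.Background(), sar, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("allowed=%v denied=%v reason=%q\n", res.Status.Allowed, res.Status.Denied, res.Status.Reason)
	}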
	
	
	==> kube-scheduler [56b7ee056128dc759220644aa7dc88d47b282cf6f68c6ce88244ec9bef2de09c] <==
	E0625 16:23:25.808420       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0625 16:23:25.807505       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0625 16:23:25.808466       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0625 16:23:25.807551       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0625 16:23:25.808512       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0625 16:23:26.622021       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0625 16:23:26.622125       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0625 16:23:26.652129       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0625 16:23:26.652254       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0625 16:23:26.664330       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0625 16:23:26.664430       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0625 16:23:26.725657       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0625 16:23:26.725776       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0625 16:23:26.768567       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0625 16:23:26.768608       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0625 16:23:26.831237       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0625 16:23:26.831283       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0625 16:23:26.869782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0625 16:23:26.869914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0625 16:23:26.925126       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0625 16:23:26.925204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0625 16:23:26.986353       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0625 16:23:26.986728       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0625 16:23:28.703349       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0625 16:27:54.398772       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jun 25 16:29:32 multinode-552402 kubelet[3079]: E0625 16:29:32.313369    3079 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.231:8443: connect: connection refused
	Jun 25 16:29:32 multinode-552402 kubelet[3079]: I0625 16:29:32.682219    3079 kubelet_node_status.go:73] "Attempting to register node" node="multinode-552402"
	Jun 25 16:29:35 multinode-552402 kubelet[3079]: I0625 16:29:35.108688    3079 kubelet_node_status.go:112] "Node was previously registered" node="multinode-552402"
	Jun 25 16:29:35 multinode-552402 kubelet[3079]: I0625 16:29:35.109254    3079 kubelet_node_status.go:76] "Successfully registered node" node="multinode-552402"
	Jun 25 16:29:35 multinode-552402 kubelet[3079]: I0625 16:29:35.112120    3079 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 25 16:29:35 multinode-552402 kubelet[3079]: I0625 16:29:35.114196    3079 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 25 16:29:35 multinode-552402 kubelet[3079]: I0625 16:29:35.164538    3079 apiserver.go:52] "Watching apiserver"
	Jun 25 16:29:35 multinode-552402 kubelet[3079]: I0625 16:29:35.167565    3079 topology_manager.go:215] "Topology Admit Handler" podUID="ea3247d1-08d6-4760-8ba1-62cd6d3b7edb" podNamespace="kube-system" podName="kube-proxy-nphd7"
	Jun 25 16:29:35 multinode-552402 kubelet[3079]: I0625 16:29:35.167913    3079 topology_manager.go:215] "Topology Admit Handler" podUID="2de38f2c-e56d-43ca-acd6-537a2c8c36c9" podNamespace="kube-system" podName="kindnet-6ctrk"
	Jun 25 16:29:35 multinode-552402 kubelet[3079]: I0625 16:29:35.169082    3079 topology_manager.go:215] "Topology Admit Handler" podUID="3716b4c2-3417-4d41-8143-decc38ce93aa" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jf2ds"
	Jun 25 16:29:35 multinode-552402 kubelet[3079]: I0625 16:29:35.169214    3079 topology_manager.go:215] "Topology Admit Handler" podUID="638c610e-b5c1-40b3-8972-fbf36c6f1bf0" podNamespace="kube-system" podName="storage-provisioner"
	Jun 25 16:29:35 multinode-552402 kubelet[3079]: I0625 16:29:35.169314    3079 topology_manager.go:215] "Topology Admit Handler" podUID="d15691ff-e95d-426b-9545-344419479d75" podNamespace="default" podName="busybox-fc5497c4f-97579"
	Jun 25 16:29:35 multinode-552402 kubelet[3079]: I0625 16:29:35.169945    3079 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 25 16:29:35 multinode-552402 kubelet[3079]: I0625 16:29:35.208947    3079 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2de38f2c-e56d-43ca-acd6-537a2c8c36c9-lib-modules\") pod \"kindnet-6ctrk\" (UID: \"2de38f2c-e56d-43ca-acd6-537a2c8c36c9\") " pod="kube-system/kindnet-6ctrk"
	Jun 25 16:29:35 multinode-552402 kubelet[3079]: I0625 16:29:35.209120    3079 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2de38f2c-e56d-43ca-acd6-537a2c8c36c9-xtables-lock\") pod \"kindnet-6ctrk\" (UID: \"2de38f2c-e56d-43ca-acd6-537a2c8c36c9\") " pod="kube-system/kindnet-6ctrk"
	Jun 25 16:29:35 multinode-552402 kubelet[3079]: I0625 16:29:35.209156    3079 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea3247d1-08d6-4760-8ba1-62cd6d3b7edb-lib-modules\") pod \"kube-proxy-nphd7\" (UID: \"ea3247d1-08d6-4760-8ba1-62cd6d3b7edb\") " pod="kube-system/kube-proxy-nphd7"
	Jun 25 16:29:35 multinode-552402 kubelet[3079]: I0625 16:29:35.209217    3079 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/638c610e-b5c1-40b3-8972-fbf36c6f1bf0-tmp\") pod \"storage-provisioner\" (UID: \"638c610e-b5c1-40b3-8972-fbf36c6f1bf0\") " pod="kube-system/storage-provisioner"
	Jun 25 16:29:35 multinode-552402 kubelet[3079]: I0625 16:29:35.209302    3079 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea3247d1-08d6-4760-8ba1-62cd6d3b7edb-xtables-lock\") pod \"kube-proxy-nphd7\" (UID: \"ea3247d1-08d6-4760-8ba1-62cd6d3b7edb\") " pod="kube-system/kube-proxy-nphd7"
	Jun 25 16:29:35 multinode-552402 kubelet[3079]: I0625 16:29:35.209338    3079 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2de38f2c-e56d-43ca-acd6-537a2c8c36c9-cni-cfg\") pod \"kindnet-6ctrk\" (UID: \"2de38f2c-e56d-43ca-acd6-537a2c8c36c9\") " pod="kube-system/kindnet-6ctrk"
	Jun 25 16:29:41 multinode-552402 kubelet[3079]: I0625 16:29:41.081871    3079 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jun 25 16:30:31 multinode-552402 kubelet[3079]: E0625 16:30:31.264276    3079 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 25 16:30:31 multinode-552402 kubelet[3079]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 25 16:30:31 multinode-552402 kubelet[3079]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 25 16:30:31 multinode-552402 kubelet[3079]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 25 16:30:31 multinode-552402 kubelet[3079]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0625 16:30:54.455237   55239 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19128-13846/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
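
The "bufio.Scanner: token too long" error in the stderr above comes from Go's bufio.Scanner, whose default per-token limit (bufio.MaxScanTokenSize, 64 KiB) is smaller than the longest line in lastStart.txt. The following is a minimal, self-contained Go sketch of reading a file with an enlarged scanner buffer; it is an illustration only, not minikube's actual logs.go code, and the 10 MiB cap and file path are assumed values:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // hypothetical path, for illustration
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		scanner := bufio.NewScanner(f)
		// Raise the per-line limit above the 64 KiB default so very long
		// log lines no longer fail with "bufio.Scanner: token too long".
		scanner.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

		for scanner.Scan() {
			fmt.Println(scanner.Text())
		}
		if err := scanner.Err(); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
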
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-552402 -n multinode-552402
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-552402 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (304.72s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 stop
E0625 16:32:32.177231   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-552402 stop: exit status 82 (2m0.455664093s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-552402-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-552402 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-552402 status: exit status 3 (18.647398604s)

                                                
                                                
-- stdout --
	multinode-552402
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-552402-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0625 16:33:17.590776   55917 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.166:22: connect: no route to host
	E0625 16:33:17.590809   55917 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.166:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-552402 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-552402 -n multinode-552402
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-552402 logs -n 25: (1.41222102s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-552402 ssh -n                                                                 | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | multinode-552402-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-552402 cp multinode-552402-m02:/home/docker/cp-test.txt                       | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | multinode-552402:/home/docker/cp-test_multinode-552402-m02_multinode-552402.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-552402 ssh -n                                                                 | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | multinode-552402-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-552402 ssh -n multinode-552402 sudo cat                                       | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | /home/docker/cp-test_multinode-552402-m02_multinode-552402.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-552402 cp multinode-552402-m02:/home/docker/cp-test.txt                       | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | multinode-552402-m03:/home/docker/cp-test_multinode-552402-m02_multinode-552402-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-552402 ssh -n                                                                 | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | multinode-552402-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-552402 ssh -n multinode-552402-m03 sudo cat                                   | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | /home/docker/cp-test_multinode-552402-m02_multinode-552402-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-552402 cp testdata/cp-test.txt                                                | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | multinode-552402-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-552402 ssh -n                                                                 | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | multinode-552402-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-552402 cp multinode-552402-m03:/home/docker/cp-test.txt                       | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1179120027/001/cp-test_multinode-552402-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-552402 ssh -n                                                                 | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | multinode-552402-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-552402 cp multinode-552402-m03:/home/docker/cp-test.txt                       | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | multinode-552402:/home/docker/cp-test_multinode-552402-m03_multinode-552402.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-552402 ssh -n                                                                 | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | multinode-552402-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-552402 ssh -n multinode-552402 sudo cat                                       | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | /home/docker/cp-test_multinode-552402-m03_multinode-552402.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-552402 cp multinode-552402-m03:/home/docker/cp-test.txt                       | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | multinode-552402-m02:/home/docker/cp-test_multinode-552402-m03_multinode-552402-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-552402 ssh -n                                                                 | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | multinode-552402-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-552402 ssh -n multinode-552402-m02 sudo cat                                   | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | /home/docker/cp-test_multinode-552402-m03_multinode-552402-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-552402 node stop m03                                                          | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	| node    | multinode-552402 node start                                                             | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC | 25 Jun 24 16:25 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-552402                                                                | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC |                     |
	| stop    | -p multinode-552402                                                                     | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:25 UTC |                     |
	| start   | -p multinode-552402                                                                     | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:27 UTC | 25 Jun 24 16:30 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-552402                                                                | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:30 UTC |                     |
	| node    | multinode-552402 node delete                                                            | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:30 UTC | 25 Jun 24 16:30 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-552402 stop                                                                   | multinode-552402 | jenkins | v1.33.1 | 25 Jun 24 16:30 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/25 16:27:53
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0625 16:27:53.537008   54199 out.go:291] Setting OutFile to fd 1 ...
	I0625 16:27:53.537264   54199 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:27:53.537274   54199 out.go:304] Setting ErrFile to fd 2...
	I0625 16:27:53.537277   54199 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:27:53.537470   54199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 16:27:53.537944   54199 out.go:298] Setting JSON to false
	I0625 16:27:53.538775   54199 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7818,"bootTime":1719325056,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0625 16:27:53.538827   54199 start.go:139] virtualization: kvm guest
	I0625 16:27:53.541156   54199 out.go:177] * [multinode-552402] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0625 16:27:53.542602   54199 out.go:177]   - MINIKUBE_LOCATION=19128
	I0625 16:27:53.542604   54199 notify.go:220] Checking for updates...
	I0625 16:27:53.543974   54199 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0625 16:27:53.545417   54199 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19128-13846/kubeconfig
	I0625 16:27:53.546821   54199 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 16:27:53.548369   54199 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0625 16:27:53.549655   54199 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0625 16:27:53.551198   54199 config.go:182] Loaded profile config "multinode-552402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:27:53.551292   54199 driver.go:392] Setting default libvirt URI to qemu:///system
	I0625 16:27:53.551820   54199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:27:53.551899   54199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:27:53.566642   54199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45121
	I0625 16:27:53.567031   54199 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:27:53.567534   54199 main.go:141] libmachine: Using API Version  1
	I0625 16:27:53.567550   54199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:27:53.567883   54199 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:27:53.568066   54199 main.go:141] libmachine: (multinode-552402) Calling .DriverName
	I0625 16:27:53.601929   54199 out.go:177] * Using the kvm2 driver based on existing profile
	I0625 16:27:53.603111   54199 start.go:297] selected driver: kvm2
	I0625 16:27:53.603131   54199 start.go:901] validating driver "kvm2" against &{Name:multinode-552402 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-552402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.177 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 16:27:53.603290   54199 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0625 16:27:53.603650   54199 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0625 16:27:53.603739   54199 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19128-13846/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0625 16:27:53.617998   54199 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0625 16:27:53.618656   54199 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0625 16:27:53.618680   54199 cni.go:84] Creating CNI manager for ""
	I0625 16:27:53.618688   54199 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0625 16:27:53.618754   54199 start.go:340] cluster config:
	{Name:multinode-552402 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-552402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.177 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 16:27:53.618890   54199 iso.go:125] acquiring lock: {Name:mk76df652d5e768afc73443035d5ecb8b75ed16c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0625 16:27:53.620615   54199 out.go:177] * Starting "multinode-552402" primary control-plane node in "multinode-552402" cluster
	I0625 16:27:53.621862   54199 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0625 16:27:53.621889   54199 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0625 16:27:53.621901   54199 cache.go:56] Caching tarball of preloaded images
	I0625 16:27:53.621981   54199 preload.go:173] Found /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0625 16:27:53.621995   54199 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0625 16:27:53.622127   54199 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/multinode-552402/config.json ...
	I0625 16:27:53.622341   54199 start.go:360] acquireMachinesLock for multinode-552402: {Name:mk2a1ebee912b37a2b68bf2f76641f82f8fc2fcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0625 16:27:53.622393   54199 start.go:364] duration metric: took 28.95µs to acquireMachinesLock for "multinode-552402"
	I0625 16:27:53.622412   54199 start.go:96] Skipping create...Using existing machine configuration
	I0625 16:27:53.622421   54199 fix.go:54] fixHost starting: 
	I0625 16:27:53.622729   54199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:27:53.622758   54199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:27:53.636140   54199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33755
	I0625 16:27:53.636552   54199 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:27:53.636993   54199 main.go:141] libmachine: Using API Version  1
	I0625 16:27:53.637006   54199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:27:53.637268   54199 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:27:53.637449   54199 main.go:141] libmachine: (multinode-552402) Calling .DriverName
	I0625 16:27:53.637559   54199 main.go:141] libmachine: (multinode-552402) Calling .GetState
	I0625 16:27:53.639040   54199 fix.go:112] recreateIfNeeded on multinode-552402: state=Running err=<nil>
	W0625 16:27:53.639067   54199 fix.go:138] unexpected machine state, will restart: <nil>
	I0625 16:27:53.641601   54199 out.go:177] * Updating the running kvm2 "multinode-552402" VM ...
	I0625 16:27:53.642919   54199 machine.go:94] provisionDockerMachine start ...
	I0625 16:27:53.642941   54199 main.go:141] libmachine: (multinode-552402) Calling .DriverName
	I0625 16:27:53.643136   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHHostname
	I0625 16:27:53.645571   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:27:53.646017   54199 main.go:141] libmachine: (multinode-552402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8e:1c", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:23:01 +0000 UTC Type:0 Mac:52:54:00:5d:8e:1c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-552402 Clientid:01:52:54:00:5d:8e:1c}
	I0625 16:27:53.646049   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined IP address 192.168.39.231 and MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:27:53.646192   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHPort
	I0625 16:27:53.646359   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:27:53.646513   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:27:53.646644   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHUsername
	I0625 16:27:53.646793   54199 main.go:141] libmachine: Using SSH client type: native
	I0625 16:27:53.647038   54199 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0625 16:27:53.647054   54199 main.go:141] libmachine: About to run SSH command:
	hostname
	I0625 16:27:53.759653   54199 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-552402
	
	I0625 16:27:53.759680   54199 main.go:141] libmachine: (multinode-552402) Calling .GetMachineName
	I0625 16:27:53.759902   54199 buildroot.go:166] provisioning hostname "multinode-552402"
	I0625 16:27:53.759924   54199 main.go:141] libmachine: (multinode-552402) Calling .GetMachineName
	I0625 16:27:53.760089   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHHostname
	I0625 16:27:53.762561   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:27:53.763003   54199 main.go:141] libmachine: (multinode-552402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8e:1c", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:23:01 +0000 UTC Type:0 Mac:52:54:00:5d:8e:1c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-552402 Clientid:01:52:54:00:5d:8e:1c}
	I0625 16:27:53.763033   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined IP address 192.168.39.231 and MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:27:53.763153   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHPort
	I0625 16:27:53.763330   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:27:53.763468   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:27:53.763609   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHUsername
	I0625 16:27:53.763771   54199 main.go:141] libmachine: Using SSH client type: native
	I0625 16:27:53.763970   54199 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0625 16:27:53.763983   54199 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-552402 && echo "multinode-552402" | sudo tee /etc/hostname
	I0625 16:27:53.890422   54199 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-552402
	
	I0625 16:27:53.890451   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHHostname
	I0625 16:27:53.893359   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:27:53.893695   54199 main.go:141] libmachine: (multinode-552402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8e:1c", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:23:01 +0000 UTC Type:0 Mac:52:54:00:5d:8e:1c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-552402 Clientid:01:52:54:00:5d:8e:1c}
	I0625 16:27:53.893733   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined IP address 192.168.39.231 and MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:27:53.893896   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHPort
	I0625 16:27:53.894123   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:27:53.894283   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:27:53.894449   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHUsername
	I0625 16:27:53.894594   54199 main.go:141] libmachine: Using SSH client type: native
	I0625 16:27:53.894769   54199 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0625 16:27:53.894792   54199 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-552402' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-552402/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-552402' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0625 16:27:54.003658   54199 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0625 16:27:54.003697   54199 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19128-13846/.minikube CaCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19128-13846/.minikube}
	I0625 16:27:54.003723   54199 buildroot.go:174] setting up certificates
	I0625 16:27:54.003736   54199 provision.go:84] configureAuth start
	I0625 16:27:54.003750   54199 main.go:141] libmachine: (multinode-552402) Calling .GetMachineName
	I0625 16:27:54.003989   54199 main.go:141] libmachine: (multinode-552402) Calling .GetIP
	I0625 16:27:54.006804   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:27:54.007181   54199 main.go:141] libmachine: (multinode-552402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8e:1c", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:23:01 +0000 UTC Type:0 Mac:52:54:00:5d:8e:1c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-552402 Clientid:01:52:54:00:5d:8e:1c}
	I0625 16:27:54.007211   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined IP address 192.168.39.231 and MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:27:54.007378   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHHostname
	I0625 16:27:54.009388   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:27:54.009702   54199 main.go:141] libmachine: (multinode-552402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8e:1c", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:23:01 +0000 UTC Type:0 Mac:52:54:00:5d:8e:1c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-552402 Clientid:01:52:54:00:5d:8e:1c}
	I0625 16:27:54.009729   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined IP address 192.168.39.231 and MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:27:54.009837   54199 provision.go:143] copyHostCerts
	I0625 16:27:54.009865   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem
	I0625 16:27:54.009903   54199 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem, removing ...
	I0625 16:27:54.009912   54199 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem
	I0625 16:27:54.009975   54199 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem (1078 bytes)
	I0625 16:27:54.010080   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem
	I0625 16:27:54.010098   54199 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem, removing ...
	I0625 16:27:54.010103   54199 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem
	I0625 16:27:54.010130   54199 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem (1123 bytes)
	I0625 16:27:54.010222   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem
	I0625 16:27:54.010242   54199 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem, removing ...
	I0625 16:27:54.010250   54199 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem
	I0625 16:27:54.010273   54199 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem (1679 bytes)
	I0625 16:27:54.010334   54199 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem org=jenkins.multinode-552402 san=[127.0.0.1 192.168.39.231 localhost minikube multinode-552402]
	I0625 16:27:54.108999   54199 provision.go:177] copyRemoteCerts
	I0625 16:27:54.109050   54199 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0625 16:27:54.109072   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHHostname
	I0625 16:27:54.111627   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:27:54.111975   54199 main.go:141] libmachine: (multinode-552402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8e:1c", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:23:01 +0000 UTC Type:0 Mac:52:54:00:5d:8e:1c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-552402 Clientid:01:52:54:00:5d:8e:1c}
	I0625 16:27:54.112014   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined IP address 192.168.39.231 and MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:27:54.112136   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHPort
	I0625 16:27:54.112305   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:27:54.112445   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHUsername
	I0625 16:27:54.112567   54199 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/multinode-552402/id_rsa Username:docker}
	I0625 16:27:54.196809   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0625 16:27:54.196893   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0625 16:27:54.221770   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0625 16:27:54.221817   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0625 16:27:54.245559   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0625 16:27:54.245626   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0625 16:27:54.269177   54199 provision.go:87] duration metric: took 265.426707ms to configureAuth
	I0625 16:27:54.269205   54199 buildroot.go:189] setting minikube options for container-runtime
	I0625 16:27:54.269412   54199 config.go:182] Loaded profile config "multinode-552402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:27:54.269476   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHHostname
	I0625 16:27:54.272193   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:27:54.272627   54199 main.go:141] libmachine: (multinode-552402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8e:1c", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:23:01 +0000 UTC Type:0 Mac:52:54:00:5d:8e:1c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-552402 Clientid:01:52:54:00:5d:8e:1c}
	I0625 16:27:54.272655   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined IP address 192.168.39.231 and MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:27:54.272842   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHPort
	I0625 16:27:54.272985   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:27:54.273157   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:27:54.273334   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHUsername
	I0625 16:27:54.273454   54199 main.go:141] libmachine: Using SSH client type: native
	I0625 16:27:54.273596   54199 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0625 16:27:54.273609   54199 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0625 16:29:25.062515   54199 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0625 16:29:25.062546   54199 machine.go:97] duration metric: took 1m31.419612639s to provisionDockerMachine
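
The "%!s(MISSING)" in the SSH command above (and the later "date +%!s(MISSING).%!N(MISSING)") is Go fmt's missing-operand marker, most likely because the remote commands contain literal %s / %N sequences (e.g. date +%s.%N) that were passed through a printf-style logging call as the format string. A tiny standalone Go illustration of the effect, not minikube's logging code:

	package main

	import "fmt"

	func main() {
		cmd := "date +%s.%N"

		// Used as the format string with no operands, each verb is replaced
		// by %!verb(MISSING), which is what appears in the log:
		fmt.Printf(cmd + "\n") // date +%!s(MISSING).%!N(MISSING)

		// Passed as an operand instead, the command prints verbatim:
		fmt.Printf("%s\n", cmd) // date +%s.%N
	}
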
	I0625 16:29:25.062558   54199 start.go:293] postStartSetup for "multinode-552402" (driver="kvm2")
	I0625 16:29:25.062569   54199 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0625 16:29:25.062584   54199 main.go:141] libmachine: (multinode-552402) Calling .DriverName
	I0625 16:29:25.062926   54199 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0625 16:29:25.062964   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHHostname
	I0625 16:29:25.065780   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:29:25.066307   54199 main.go:141] libmachine: (multinode-552402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8e:1c", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:23:01 +0000 UTC Type:0 Mac:52:54:00:5d:8e:1c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-552402 Clientid:01:52:54:00:5d:8e:1c}
	I0625 16:29:25.066336   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined IP address 192.168.39.231 and MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:29:25.066463   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHPort
	I0625 16:29:25.066660   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:29:25.066820   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHUsername
	I0625 16:29:25.066956   54199 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/multinode-552402/id_rsa Username:docker}
	I0625 16:29:25.153941   54199 ssh_runner.go:195] Run: cat /etc/os-release
	I0625 16:29:25.158216   54199 command_runner.go:130] > NAME=Buildroot
	I0625 16:29:25.158236   54199 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0625 16:29:25.158240   54199 command_runner.go:130] > ID=buildroot
	I0625 16:29:25.158245   54199 command_runner.go:130] > VERSION_ID=2023.02.9
	I0625 16:29:25.158250   54199 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0625 16:29:25.158277   54199 info.go:137] Remote host: Buildroot 2023.02.9
	I0625 16:29:25.158286   54199 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/addons for local assets ...
	I0625 16:29:25.158339   54199 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/files for local assets ...
	I0625 16:29:25.158424   54199 filesync.go:149] local asset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> 212392.pem in /etc/ssl/certs
	I0625 16:29:25.158436   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> /etc/ssl/certs/212392.pem
	I0625 16:29:25.158554   54199 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0625 16:29:25.167710   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem --> /etc/ssl/certs/212392.pem (1708 bytes)
	I0625 16:29:25.192683   54199 start.go:296] duration metric: took 130.112778ms for postStartSetup
	I0625 16:29:25.192723   54199 fix.go:56] duration metric: took 1m31.570301433s for fixHost
	I0625 16:29:25.192771   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHHostname
	I0625 16:29:25.195558   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:29:25.195974   54199 main.go:141] libmachine: (multinode-552402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8e:1c", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:23:01 +0000 UTC Type:0 Mac:52:54:00:5d:8e:1c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-552402 Clientid:01:52:54:00:5d:8e:1c}
	I0625 16:29:25.196003   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined IP address 192.168.39.231 and MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:29:25.196157   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHPort
	I0625 16:29:25.196388   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:29:25.196565   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:29:25.196725   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHUsername
	I0625 16:29:25.196934   54199 main.go:141] libmachine: Using SSH client type: native
	I0625 16:29:25.197095   54199 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0625 16:29:25.197106   54199 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0625 16:29:25.303367   54199 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719332965.283343000
	
	I0625 16:29:25.303387   54199 fix.go:216] guest clock: 1719332965.283343000
	I0625 16:29:25.303393   54199 fix.go:229] Guest: 2024-06-25 16:29:25.283343 +0000 UTC Remote: 2024-06-25 16:29:25.192728326 +0000 UTC m=+91.687898674 (delta=90.614674ms)
	I0625 16:29:25.303414   54199 fix.go:200] guest clock delta is within tolerance: 90.614674ms
	I0625 16:29:25.303422   54199 start.go:83] releasing machines lock for "multinode-552402", held for 1m31.681017187s
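Note on the fix.go lines above: they compare the guest's clock, as reported back over SSH, against the local timestamp recorded when the command returned, and accept the host because the ~90ms delta is within tolerance. Below is a minimal, self-contained Go sketch of that comparison using the values from the log; the function name and the 2-second tolerance are illustrative, not minikube's actual API.

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance returns the absolute difference between the
// guest clock and the local clock, and whether it is acceptable.
// Illustrative only; minikube performs this check inside fix.go.
func clockDeltaWithinTolerance(guest, local time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(local)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Values taken from the log lines above.
	guest := time.Unix(1719332965, 283343000).UTC()
	local := time.Date(2024, 6, 25, 16, 29, 25, 192728326, time.UTC)
	delta, ok := clockDeltaWithinTolerance(guest, local, 2*time.Second) // tolerance chosen for the example
	fmt.Println(delta, ok)                                              // ~90.614674ms true
}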
	I0625 16:29:25.303446   54199 main.go:141] libmachine: (multinode-552402) Calling .DriverName
	I0625 16:29:25.303740   54199 main.go:141] libmachine: (multinode-552402) Calling .GetIP
	I0625 16:29:25.306119   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:29:25.306415   54199 main.go:141] libmachine: (multinode-552402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8e:1c", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:23:01 +0000 UTC Type:0 Mac:52:54:00:5d:8e:1c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-552402 Clientid:01:52:54:00:5d:8e:1c}
	I0625 16:29:25.306442   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined IP address 192.168.39.231 and MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:29:25.306597   54199 main.go:141] libmachine: (multinode-552402) Calling .DriverName
	I0625 16:29:25.307128   54199 main.go:141] libmachine: (multinode-552402) Calling .DriverName
	I0625 16:29:25.307310   54199 main.go:141] libmachine: (multinode-552402) Calling .DriverName
	I0625 16:29:25.307380   54199 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0625 16:29:25.307430   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHHostname
	I0625 16:29:25.307516   54199 ssh_runner.go:195] Run: cat /version.json
	I0625 16:29:25.307535   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHHostname
	I0625 16:29:25.310079   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:29:25.310316   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:29:25.310437   54199 main.go:141] libmachine: (multinode-552402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8e:1c", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:23:01 +0000 UTC Type:0 Mac:52:54:00:5d:8e:1c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-552402 Clientid:01:52:54:00:5d:8e:1c}
	I0625 16:29:25.310479   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined IP address 192.168.39.231 and MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:29:25.310623   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHPort
	I0625 16:29:25.310663   54199 main.go:141] libmachine: (multinode-552402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8e:1c", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:23:01 +0000 UTC Type:0 Mac:52:54:00:5d:8e:1c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-552402 Clientid:01:52:54:00:5d:8e:1c}
	I0625 16:29:25.310692   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined IP address 192.168.39.231 and MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:29:25.310829   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:29:25.310847   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHPort
	I0625 16:29:25.311041   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHUsername
	I0625 16:29:25.311083   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:29:25.311172   54199 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/multinode-552402/id_rsa Username:docker}
	I0625 16:29:25.311227   54199 main.go:141] libmachine: (multinode-552402) Calling .GetSSHUsername
	I0625 16:29:25.311371   54199 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/multinode-552402/id_rsa Username:docker}
	I0625 16:29:25.391251   54199 command_runner.go:130] > {"iso_version": "v1.33.1-1719245461-19128", "kicbase_version": "v0.0.44-1719002606-19116", "minikube_version": "v1.33.1", "commit": "a360798964ab8cf5f737423b2567c84f01731264"}
	I0625 16:29:25.418157   54199 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
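Note: the two probes above fetch /version.json from the guest and run curl -sS -m 2 against https://registry.k8s.io/; the redirect page that comes back is enough to show the registry is reachable from the VM. A rough Go equivalent of the curl probe follows, under the assumption that any HTTP response counts as success; the helper name is invented for the example.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// registryReachable mirrors `curl -sS -m 2 <url>`: a GET with a 2-second
// overall timeout. Any response, even a redirect page, means the registry
// is reachable. Illustrative sketch, not minikube's helper.
func registryReachable(url string) error {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	_, err = io.Copy(io.Discard, resp.Body)
	return err
}

func main() {
	if err := registryReachable("https://registry.k8s.io/"); err != nil {
		fmt.Println("registry not reachable:", err)
		return
	}
	fmt.Println("registry reachable")
}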
	I0625 16:29:25.418988   54199 ssh_runner.go:195] Run: systemctl --version
	I0625 16:29:25.425110   54199 command_runner.go:130] > systemd 252 (252)
	I0625 16:29:25.425141   54199 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0625 16:29:25.425199   54199 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0625 16:29:25.582338   54199 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0625 16:29:25.589540   54199 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0625 16:29:25.589834   54199 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0625 16:29:25.589902   54199 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0625 16:29:25.599160   54199 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0625 16:29:25.599180   54199 start.go:494] detecting cgroup driver to use...
	I0625 16:29:25.599238   54199 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0625 16:29:25.614989   54199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0625 16:29:25.629327   54199 docker.go:217] disabling cri-docker service (if available) ...
	I0625 16:29:25.629383   54199 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0625 16:29:25.642491   54199 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0625 16:29:25.655611   54199 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0625 16:29:25.809523   54199 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0625 16:29:25.945961   54199 docker.go:233] disabling docker service ...
	I0625 16:29:25.946034   54199 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0625 16:29:25.962962   54199 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0625 16:29:25.976994   54199 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0625 16:29:26.112301   54199 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0625 16:29:26.253486   54199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0625 16:29:26.267474   54199 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0625 16:29:26.285918   54199 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0625 16:29:26.285956   54199 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0625 16:29:26.286020   54199 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:29:26.297299   54199 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0625 16:29:26.297354   54199 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:29:26.308733   54199 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:29:26.319454   54199 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:29:26.330382   54199 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0625 16:29:26.341188   54199 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:29:26.351915   54199 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:29:26.362406   54199 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:29:26.373022   54199 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0625 16:29:26.382529   54199 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0625 16:29:26.382577   54199 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0625 16:29:26.392650   54199 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 16:29:26.531883   54199 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0625 16:29:28.582018   54199 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.050096893s)
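Note: the run above is the CRI-O (re)configuration pass. It writes /etc/crictl.yaml, points pause_image at registry.k8s.io/pause:3.9, switches cgroup_manager to cgroupfs, re-adds conmon_cgroup = "pod", opens net.ipv4.ip_unprivileged_port_start, enables IP forwarding, then reloads systemd and restarts crio. The Go sketch below reproduces a subset of those shell edits locally; the command strings mirror the ones logged, but minikube itself sends them over SSH, and running them requires root on the guest.

package main

import (
	"fmt"
	"os/exec"
)

// crioConfigCommands rebuilds a subset of the edits logged above against
// /etc/crio/crio.conf.d/02-crio.conf. Sketch only; minikube drives these
// over its ssh_runner rather than running them locally.
func crioConfigCommands(pauseImage, cgroupManager string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
}

func main() {
	for _, cmd := range crioConfigCommands("registry.k8s.io/pause:3.9", "cgroupfs") {
		// Each command is run through a shell, as in the log.
		if out, err := exec.Command("sh", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("%s: %v\n%s", cmd, err, out)
			return
		}
	}
}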
	I0625 16:29:28.582062   54199 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0625 16:29:28.582103   54199 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0625 16:29:28.587157   54199 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0625 16:29:28.587185   54199 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0625 16:29:28.587197   54199 command_runner.go:130] > Device: 0,22	Inode: 1326        Links: 1
	I0625 16:29:28.587212   54199 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0625 16:29:28.587223   54199 command_runner.go:130] > Access: 2024-06-25 16:29:28.465516283 +0000
	I0625 16:29:28.587235   54199 command_runner.go:130] > Modify: 2024-06-25 16:29:28.465516283 +0000
	I0625 16:29:28.587247   54199 command_runner.go:130] > Change: 2024-06-25 16:29:28.465516283 +0000
	I0625 16:29:28.587256   54199 command_runner.go:130] >  Birth: -
	I0625 16:29:28.587280   54199 start.go:562] Will wait 60s for crictl version
	I0625 16:29:28.587321   54199 ssh_runner.go:195] Run: which crictl
	I0625 16:29:28.591243   54199 command_runner.go:130] > /usr/bin/crictl
	I0625 16:29:28.591299   54199 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0625 16:29:28.629070   54199 command_runner.go:130] > Version:  0.1.0
	I0625 16:29:28.629086   54199 command_runner.go:130] > RuntimeName:  cri-o
	I0625 16:29:28.629091   54199 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0625 16:29:28.629097   54199 command_runner.go:130] > RuntimeApiVersion:  v1
	I0625 16:29:28.630138   54199 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0625 16:29:28.630224   54199 ssh_runner.go:195] Run: crio --version
	I0625 16:29:28.657653   54199 command_runner.go:130] > crio version 1.29.1
	I0625 16:29:28.657675   54199 command_runner.go:130] > Version:        1.29.1
	I0625 16:29:28.657684   54199 command_runner.go:130] > GitCommit:      unknown
	I0625 16:29:28.657691   54199 command_runner.go:130] > GitCommitDate:  unknown
	I0625 16:29:28.657698   54199 command_runner.go:130] > GitTreeState:   clean
	I0625 16:29:28.657707   54199 command_runner.go:130] > BuildDate:      2024-06-24T21:45:48Z
	I0625 16:29:28.657714   54199 command_runner.go:130] > GoVersion:      go1.21.6
	I0625 16:29:28.657720   54199 command_runner.go:130] > Compiler:       gc
	I0625 16:29:28.657729   54199 command_runner.go:130] > Platform:       linux/amd64
	I0625 16:29:28.657736   54199 command_runner.go:130] > Linkmode:       dynamic
	I0625 16:29:28.657763   54199 command_runner.go:130] > BuildTags:      
	I0625 16:29:28.657775   54199 command_runner.go:130] >   containers_image_ostree_stub
	I0625 16:29:28.657782   54199 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0625 16:29:28.657789   54199 command_runner.go:130] >   btrfs_noversion
	I0625 16:29:28.657799   54199 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0625 16:29:28.657808   54199 command_runner.go:130] >   libdm_no_deferred_remove
	I0625 16:29:28.657817   54199 command_runner.go:130] >   seccomp
	I0625 16:29:28.657827   54199 command_runner.go:130] > LDFlags:          unknown
	I0625 16:29:28.657839   54199 command_runner.go:130] > SeccompEnabled:   true
	I0625 16:29:28.657845   54199 command_runner.go:130] > AppArmorEnabled:  false
	I0625 16:29:28.657918   54199 ssh_runner.go:195] Run: crio --version
	I0625 16:29:28.684341   54199 command_runner.go:130] > crio version 1.29.1
	I0625 16:29:28.684364   54199 command_runner.go:130] > Version:        1.29.1
	I0625 16:29:28.684369   54199 command_runner.go:130] > GitCommit:      unknown
	I0625 16:29:28.684373   54199 command_runner.go:130] > GitCommitDate:  unknown
	I0625 16:29:28.684378   54199 command_runner.go:130] > GitTreeState:   clean
	I0625 16:29:28.684383   54199 command_runner.go:130] > BuildDate:      2024-06-24T21:45:48Z
	I0625 16:29:28.684387   54199 command_runner.go:130] > GoVersion:      go1.21.6
	I0625 16:29:28.684391   54199 command_runner.go:130] > Compiler:       gc
	I0625 16:29:28.684395   54199 command_runner.go:130] > Platform:       linux/amd64
	I0625 16:29:28.684399   54199 command_runner.go:130] > Linkmode:       dynamic
	I0625 16:29:28.684403   54199 command_runner.go:130] > BuildTags:      
	I0625 16:29:28.684407   54199 command_runner.go:130] >   containers_image_ostree_stub
	I0625 16:29:28.684412   54199 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0625 16:29:28.684416   54199 command_runner.go:130] >   btrfs_noversion
	I0625 16:29:28.684420   54199 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0625 16:29:28.684424   54199 command_runner.go:130] >   libdm_no_deferred_remove
	I0625 16:29:28.684427   54199 command_runner.go:130] >   seccomp
	I0625 16:29:28.684431   54199 command_runner.go:130] > LDFlags:          unknown
	I0625 16:29:28.684435   54199 command_runner.go:130] > SeccompEnabled:   true
	I0625 16:29:28.684439   54199 command_runner.go:130] > AppArmorEnabled:  false
	I0625 16:29:28.687372   54199 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0625 16:29:28.688550   54199 main.go:141] libmachine: (multinode-552402) Calling .GetIP
	I0625 16:29:28.690939   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:29:28.691228   54199 main.go:141] libmachine: (multinode-552402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8e:1c", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:23:01 +0000 UTC Type:0 Mac:52:54:00:5d:8e:1c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-552402 Clientid:01:52:54:00:5d:8e:1c}
	I0625 16:29:28.691259   54199 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined IP address 192.168.39.231 and MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:29:28.691492   54199 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0625 16:29:28.695753   54199 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0625 16:29:28.695839   54199 kubeadm.go:877] updating cluster {Name:multinode-552402 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-552402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.177 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0625 16:29:28.695994   54199 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0625 16:29:28.696037   54199 ssh_runner.go:195] Run: sudo crictl images --output json
	I0625 16:29:28.747797   54199 command_runner.go:130] > {
	I0625 16:29:28.747817   54199 command_runner.go:130] >   "images": [
	I0625 16:29:28.747822   54199 command_runner.go:130] >     {
	I0625 16:29:28.747843   54199 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0625 16:29:28.747848   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.747854   54199 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0625 16:29:28.747857   54199 command_runner.go:130] >       ],
	I0625 16:29:28.747861   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.747869   54199 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0625 16:29:28.747880   54199 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0625 16:29:28.747883   54199 command_runner.go:130] >       ],
	I0625 16:29:28.747888   54199 command_runner.go:130] >       "size": "65908273",
	I0625 16:29:28.747893   54199 command_runner.go:130] >       "uid": null,
	I0625 16:29:28.747897   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.747904   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.747908   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.747915   54199 command_runner.go:130] >     },
	I0625 16:29:28.747918   54199 command_runner.go:130] >     {
	I0625 16:29:28.747924   54199 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0625 16:29:28.747930   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.747993   54199 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0625 16:29:28.748015   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748022   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.748034   54199 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0625 16:29:28.748047   54199 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0625 16:29:28.748056   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748063   54199 command_runner.go:130] >       "size": "1363676",
	I0625 16:29:28.748072   54199 command_runner.go:130] >       "uid": null,
	I0625 16:29:28.748083   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.748093   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.748102   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.748111   54199 command_runner.go:130] >     },
	I0625 16:29:28.748116   54199 command_runner.go:130] >     {
	I0625 16:29:28.748128   54199 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0625 16:29:28.748134   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.748144   54199 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0625 16:29:28.748153   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748160   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.748175   54199 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0625 16:29:28.748190   54199 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0625 16:29:28.748199   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748208   54199 command_runner.go:130] >       "size": "31470524",
	I0625 16:29:28.748217   54199 command_runner.go:130] >       "uid": null,
	I0625 16:29:28.748223   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.748230   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.748239   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.748245   54199 command_runner.go:130] >     },
	I0625 16:29:28.748253   54199 command_runner.go:130] >     {
	I0625 16:29:28.748263   54199 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0625 16:29:28.748277   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.748291   54199 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0625 16:29:28.748300   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748305   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.748314   54199 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0625 16:29:28.748327   54199 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0625 16:29:28.748334   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748338   54199 command_runner.go:130] >       "size": "61245718",
	I0625 16:29:28.748342   54199 command_runner.go:130] >       "uid": null,
	I0625 16:29:28.748346   54199 command_runner.go:130] >       "username": "nonroot",
	I0625 16:29:28.748350   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.748356   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.748361   54199 command_runner.go:130] >     },
	I0625 16:29:28.748365   54199 command_runner.go:130] >     {
	I0625 16:29:28.748373   54199 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0625 16:29:28.748379   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.748384   54199 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0625 16:29:28.748390   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748393   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.748400   54199 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0625 16:29:28.748410   54199 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0625 16:29:28.748415   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748422   54199 command_runner.go:130] >       "size": "150779692",
	I0625 16:29:28.748425   54199 command_runner.go:130] >       "uid": {
	I0625 16:29:28.748432   54199 command_runner.go:130] >         "value": "0"
	I0625 16:29:28.748435   54199 command_runner.go:130] >       },
	I0625 16:29:28.748440   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.748445   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.748449   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.748453   54199 command_runner.go:130] >     },
	I0625 16:29:28.748458   54199 command_runner.go:130] >     {
	I0625 16:29:28.748464   54199 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0625 16:29:28.748470   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.748474   54199 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0625 16:29:28.748478   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748483   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.748492   54199 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0625 16:29:28.748500   54199 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0625 16:29:28.748505   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748509   54199 command_runner.go:130] >       "size": "117609954",
	I0625 16:29:28.748516   54199 command_runner.go:130] >       "uid": {
	I0625 16:29:28.748519   54199 command_runner.go:130] >         "value": "0"
	I0625 16:29:28.748522   54199 command_runner.go:130] >       },
	I0625 16:29:28.748526   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.748530   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.748534   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.748538   54199 command_runner.go:130] >     },
	I0625 16:29:28.748541   54199 command_runner.go:130] >     {
	I0625 16:29:28.748546   54199 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0625 16:29:28.748553   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.748558   54199 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0625 16:29:28.748563   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748567   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.748575   54199 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0625 16:29:28.748585   54199 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0625 16:29:28.748591   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748595   54199 command_runner.go:130] >       "size": "112194888",
	I0625 16:29:28.748601   54199 command_runner.go:130] >       "uid": {
	I0625 16:29:28.748605   54199 command_runner.go:130] >         "value": "0"
	I0625 16:29:28.748608   54199 command_runner.go:130] >       },
	I0625 16:29:28.748614   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.748618   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.748622   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.748625   54199 command_runner.go:130] >     },
	I0625 16:29:28.748628   54199 command_runner.go:130] >     {
	I0625 16:29:28.748636   54199 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0625 16:29:28.748639   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.748644   54199 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0625 16:29:28.748651   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748654   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.748667   54199 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0625 16:29:28.748675   54199 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0625 16:29:28.748679   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748695   54199 command_runner.go:130] >       "size": "85953433",
	I0625 16:29:28.748702   54199 command_runner.go:130] >       "uid": null,
	I0625 16:29:28.748706   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.748710   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.748714   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.748716   54199 command_runner.go:130] >     },
	I0625 16:29:28.748722   54199 command_runner.go:130] >     {
	I0625 16:29:28.748731   54199 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0625 16:29:28.748738   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.748745   54199 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0625 16:29:28.748751   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748758   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.748768   54199 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0625 16:29:28.748780   54199 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0625 16:29:28.748785   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748791   54199 command_runner.go:130] >       "size": "63051080",
	I0625 16:29:28.748797   54199 command_runner.go:130] >       "uid": {
	I0625 16:29:28.748804   54199 command_runner.go:130] >         "value": "0"
	I0625 16:29:28.748809   54199 command_runner.go:130] >       },
	I0625 16:29:28.748815   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.748820   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.748829   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.748835   54199 command_runner.go:130] >     },
	I0625 16:29:28.748844   54199 command_runner.go:130] >     {
	I0625 16:29:28.748851   54199 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0625 16:29:28.748858   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.748862   54199 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0625 16:29:28.748867   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748871   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.748880   54199 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0625 16:29:28.748887   54199 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0625 16:29:28.748893   54199 command_runner.go:130] >       ],
	I0625 16:29:28.748897   54199 command_runner.go:130] >       "size": "750414",
	I0625 16:29:28.748900   54199 command_runner.go:130] >       "uid": {
	I0625 16:29:28.748904   54199 command_runner.go:130] >         "value": "65535"
	I0625 16:29:28.748907   54199 command_runner.go:130] >       },
	I0625 16:29:28.748917   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.748923   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.748927   54199 command_runner.go:130] >       "pinned": true
	I0625 16:29:28.748931   54199 command_runner.go:130] >     }
	I0625 16:29:28.748934   54199 command_runner.go:130] >   ]
	I0625 16:29:28.748937   54199 command_runner.go:130] > }
	I0625 16:29:28.749105   54199 crio.go:514] all images are preloaded for cri-o runtime.
	I0625 16:29:28.749117   54199 crio.go:433] Images already preloaded, skipping extraction
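Note: the JSON above is the output of sudo crictl images --output json; minikube only needs the repoTags from it to decide whether the preload tarball must be extracted, which is the verdict crio.go logs here. Below is a small illustrative sketch of such a check; the struct follows the JSON shape printed above, and the helper name is invented for the example.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the JSON shape printed by `crictl images --output json`.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Pinned   bool     `json:"pinned"`
	} `json:"images"`
}

// hasImages reports whether every wanted tag is already in the runtime's
// image store. Illustrative only; not minikube's crio.go implementation.
func hasImages(wanted []string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	present := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			present[tag] = true
		}
	}
	for _, w := range wanted {
		if !present[w] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := hasImages([]string{"registry.k8s.io/kube-apiserver:v1.30.2", "registry.k8s.io/pause:3.9"})
	fmt.Println(ok, err)
}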
	I0625 16:29:28.749166   54199 ssh_runner.go:195] Run: sudo crictl images --output json
	I0625 16:29:28.781665   54199 command_runner.go:130] > {
	I0625 16:29:28.781689   54199 command_runner.go:130] >   "images": [
	I0625 16:29:28.781695   54199 command_runner.go:130] >     {
	I0625 16:29:28.781709   54199 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0625 16:29:28.781717   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.781729   54199 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0625 16:29:28.781734   54199 command_runner.go:130] >       ],
	I0625 16:29:28.781745   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.781759   54199 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0625 16:29:28.781774   54199 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0625 16:29:28.781781   54199 command_runner.go:130] >       ],
	I0625 16:29:28.781789   54199 command_runner.go:130] >       "size": "65908273",
	I0625 16:29:28.781799   54199 command_runner.go:130] >       "uid": null,
	I0625 16:29:28.781807   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.781822   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.781833   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.781839   54199 command_runner.go:130] >     },
	I0625 16:29:28.781848   54199 command_runner.go:130] >     {
	I0625 16:29:28.781859   54199 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0625 16:29:28.781870   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.781881   54199 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0625 16:29:28.781890   54199 command_runner.go:130] >       ],
	I0625 16:29:28.781901   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.781916   54199 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0625 16:29:28.781931   54199 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0625 16:29:28.781940   54199 command_runner.go:130] >       ],
	I0625 16:29:28.781947   54199 command_runner.go:130] >       "size": "1363676",
	I0625 16:29:28.781956   54199 command_runner.go:130] >       "uid": null,
	I0625 16:29:28.781966   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.781982   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.781992   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.782001   54199 command_runner.go:130] >     },
	I0625 16:29:28.782009   54199 command_runner.go:130] >     {
	I0625 16:29:28.782020   54199 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0625 16:29:28.782030   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.782039   54199 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0625 16:29:28.782048   54199 command_runner.go:130] >       ],
	I0625 16:29:28.782055   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.782070   54199 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0625 16:29:28.782084   54199 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0625 16:29:28.782092   54199 command_runner.go:130] >       ],
	I0625 16:29:28.782099   54199 command_runner.go:130] >       "size": "31470524",
	I0625 16:29:28.782108   54199 command_runner.go:130] >       "uid": null,
	I0625 16:29:28.782118   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.782125   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.782136   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.782142   54199 command_runner.go:130] >     },
	I0625 16:29:28.782150   54199 command_runner.go:130] >     {
	I0625 16:29:28.782161   54199 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0625 16:29:28.782172   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.782180   54199 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0625 16:29:28.782189   54199 command_runner.go:130] >       ],
	I0625 16:29:28.782196   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.782211   54199 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0625 16:29:28.782234   54199 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0625 16:29:28.782243   54199 command_runner.go:130] >       ],
	I0625 16:29:28.782250   54199 command_runner.go:130] >       "size": "61245718",
	I0625 16:29:28.782256   54199 command_runner.go:130] >       "uid": null,
	I0625 16:29:28.782263   54199 command_runner.go:130] >       "username": "nonroot",
	I0625 16:29:28.782271   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.782277   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.782285   54199 command_runner.go:130] >     },
	I0625 16:29:28.782290   54199 command_runner.go:130] >     {
	I0625 16:29:28.782301   54199 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0625 16:29:28.782308   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.782324   54199 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0625 16:29:28.782333   54199 command_runner.go:130] >       ],
	I0625 16:29:28.782339   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.782362   54199 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0625 16:29:28.782376   54199 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0625 16:29:28.782381   54199 command_runner.go:130] >       ],
	I0625 16:29:28.782387   54199 command_runner.go:130] >       "size": "150779692",
	I0625 16:29:28.782395   54199 command_runner.go:130] >       "uid": {
	I0625 16:29:28.782401   54199 command_runner.go:130] >         "value": "0"
	I0625 16:29:28.782409   54199 command_runner.go:130] >       },
	I0625 16:29:28.782416   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.782425   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.782431   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.782440   54199 command_runner.go:130] >     },
	I0625 16:29:28.782444   54199 command_runner.go:130] >     {
	I0625 16:29:28.782455   54199 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0625 16:29:28.782464   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.782496   54199 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0625 16:29:28.782505   54199 command_runner.go:130] >       ],
	I0625 16:29:28.782511   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.782525   54199 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0625 16:29:28.782540   54199 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0625 16:29:28.782549   54199 command_runner.go:130] >       ],
	I0625 16:29:28.782555   54199 command_runner.go:130] >       "size": "117609954",
	I0625 16:29:28.782565   54199 command_runner.go:130] >       "uid": {
	I0625 16:29:28.782572   54199 command_runner.go:130] >         "value": "0"
	I0625 16:29:28.782580   54199 command_runner.go:130] >       },
	I0625 16:29:28.782587   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.782596   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.782601   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.782610   54199 command_runner.go:130] >     },
	I0625 16:29:28.782616   54199 command_runner.go:130] >     {
	I0625 16:29:28.782628   54199 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0625 16:29:28.782650   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.782661   54199 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0625 16:29:28.782666   54199 command_runner.go:130] >       ],
	I0625 16:29:28.782682   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.782698   54199 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0625 16:29:28.782715   54199 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0625 16:29:28.782724   54199 command_runner.go:130] >       ],
	I0625 16:29:28.782730   54199 command_runner.go:130] >       "size": "112194888",
	I0625 16:29:28.782739   54199 command_runner.go:130] >       "uid": {
	I0625 16:29:28.782746   54199 command_runner.go:130] >         "value": "0"
	I0625 16:29:28.782754   54199 command_runner.go:130] >       },
	I0625 16:29:28.782760   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.782766   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.782775   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.782781   54199 command_runner.go:130] >     },
	I0625 16:29:28.782789   54199 command_runner.go:130] >     {
	I0625 16:29:28.782798   54199 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0625 16:29:28.782806   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.782813   54199 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0625 16:29:28.782820   54199 command_runner.go:130] >       ],
	I0625 16:29:28.782826   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.782862   54199 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0625 16:29:28.782879   54199 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0625 16:29:28.782884   54199 command_runner.go:130] >       ],
	I0625 16:29:28.782890   54199 command_runner.go:130] >       "size": "85953433",
	I0625 16:29:28.782897   54199 command_runner.go:130] >       "uid": null,
	I0625 16:29:28.782904   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.782913   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.782918   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.782927   54199 command_runner.go:130] >     },
	I0625 16:29:28.782932   54199 command_runner.go:130] >     {
	I0625 16:29:28.782944   54199 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0625 16:29:28.782953   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.782964   54199 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0625 16:29:28.782973   54199 command_runner.go:130] >       ],
	I0625 16:29:28.782980   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.782993   54199 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0625 16:29:28.783007   54199 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0625 16:29:28.783016   54199 command_runner.go:130] >       ],
	I0625 16:29:28.783031   54199 command_runner.go:130] >       "size": "63051080",
	I0625 16:29:28.783042   54199 command_runner.go:130] >       "uid": {
	I0625 16:29:28.783047   54199 command_runner.go:130] >         "value": "0"
	I0625 16:29:28.783055   54199 command_runner.go:130] >       },
	I0625 16:29:28.783061   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.783070   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.783076   54199 command_runner.go:130] >       "pinned": false
	I0625 16:29:28.783081   54199 command_runner.go:130] >     },
	I0625 16:29:28.783090   54199 command_runner.go:130] >     {
	I0625 16:29:28.783101   54199 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0625 16:29:28.783109   54199 command_runner.go:130] >       "repoTags": [
	I0625 16:29:28.783116   54199 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0625 16:29:28.783124   54199 command_runner.go:130] >       ],
	I0625 16:29:28.783130   54199 command_runner.go:130] >       "repoDigests": [
	I0625 16:29:28.783144   54199 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0625 16:29:28.783157   54199 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0625 16:29:28.783165   54199 command_runner.go:130] >       ],
	I0625 16:29:28.783169   54199 command_runner.go:130] >       "size": "750414",
	I0625 16:29:28.783173   54199 command_runner.go:130] >       "uid": {
	I0625 16:29:28.783177   54199 command_runner.go:130] >         "value": "65535"
	I0625 16:29:28.783181   54199 command_runner.go:130] >       },
	I0625 16:29:28.783185   54199 command_runner.go:130] >       "username": "",
	I0625 16:29:28.783189   54199 command_runner.go:130] >       "spec": null,
	I0625 16:29:28.783193   54199 command_runner.go:130] >       "pinned": true
	I0625 16:29:28.783196   54199 command_runner.go:130] >     }
	I0625 16:29:28.783200   54199 command_runner.go:130] >   ]
	I0625 16:29:28.783203   54199 command_runner.go:130] > }
	I0625 16:29:28.783539   54199 crio.go:514] all images are preloaded for cri-o runtime.
	I0625 16:29:28.783561   54199 cache_images.go:84] Images are preloaded, skipping loading
	I0625 16:29:28.783570   54199 kubeadm.go:928] updating node { 192.168.39.231 8443 v1.30.2 crio true true} ...
	I0625 16:29:28.783678   54199 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-552402 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-552402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
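Note: kubeadm.go:940 above prints the kubelet systemd drop-in minikube generates for this node, in which ExecStart is cleared and re-set to the versioned kubelet binary with the node's hostname override and IP. The text/template sketch below renders the same fragment; the template text mirrors the logged unit, while the struct and field names are illustrative.

package main

import (
	"os"
	"text/template"
)

// kubeletUnit holds the per-node values substituted into the drop-in.
// Field names are illustrative.
type kubeletUnit struct {
	KubeletPath string
	Hostname    string
	NodeIP      string
}

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	u := kubeletUnit{
		KubeletPath: "/var/lib/minikube/binaries/v1.30.2/kubelet",
		Hostname:    "multinode-552402",
		NodeIP:      "192.168.39.231",
	}
	// Render to stdout; minikube writes the result to a systemd drop-in on the guest.
	template.Must(template.New("kubelet").Parse(unitTmpl)).Execute(os.Stdout, u)
}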
	I0625 16:29:28.783753   54199 ssh_runner.go:195] Run: crio config
	I0625 16:29:28.815125   54199 command_runner.go:130] ! time="2024-06-25 16:29:28.795053671Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0625 16:29:28.821643   54199 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0625 16:29:28.827409   54199 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0625 16:29:28.827427   54199 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0625 16:29:28.827434   54199 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0625 16:29:28.827437   54199 command_runner.go:130] > #
	I0625 16:29:28.827451   54199 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0625 16:29:28.827460   54199 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0625 16:29:28.827472   54199 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0625 16:29:28.827487   54199 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0625 16:29:28.827497   54199 command_runner.go:130] > # reload'.
	I0625 16:29:28.827503   54199 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0625 16:29:28.827509   54199 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0625 16:29:28.827515   54199 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0625 16:29:28.827521   54199 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0625 16:29:28.827526   54199 command_runner.go:130] > [crio]
	I0625 16:29:28.827531   54199 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0625 16:29:28.827539   54199 command_runner.go:130] > # containers images, in this directory.
	I0625 16:29:28.827544   54199 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0625 16:29:28.827562   54199 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0625 16:29:28.827571   54199 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0625 16:29:28.827586   54199 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0625 16:29:28.827597   54199 command_runner.go:130] > # imagestore = ""
	I0625 16:29:28.827608   54199 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0625 16:29:28.827621   54199 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0625 16:29:28.827626   54199 command_runner.go:130] > storage_driver = "overlay"
	I0625 16:29:28.827631   54199 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0625 16:29:28.827638   54199 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0625 16:29:28.827644   54199 command_runner.go:130] > storage_option = [
	I0625 16:29:28.827652   54199 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0625 16:29:28.827658   54199 command_runner.go:130] > ]
	I0625 16:29:28.827674   54199 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0625 16:29:28.827687   54199 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0625 16:29:28.827698   54199 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0625 16:29:28.827709   54199 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0625 16:29:28.827715   54199 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0625 16:29:28.827719   54199 command_runner.go:130] > # always happen on a node reboot
	I0625 16:29:28.827725   54199 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0625 16:29:28.827737   54199 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0625 16:29:28.827747   54199 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0625 16:29:28.827759   54199 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0625 16:29:28.827767   54199 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0625 16:29:28.827782   54199 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0625 16:29:28.827797   54199 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0625 16:29:28.827804   54199 command_runner.go:130] > # internal_wipe = true
	I0625 16:29:28.827814   54199 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0625 16:29:28.827827   54199 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0625 16:29:28.827838   54199 command_runner.go:130] > # internal_repair = false
	I0625 16:29:28.827850   54199 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0625 16:29:28.827862   54199 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0625 16:29:28.827874   54199 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0625 16:29:28.827885   54199 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0625 16:29:28.827893   54199 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0625 16:29:28.827902   54199 command_runner.go:130] > [crio.api]
	I0625 16:29:28.827914   54199 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0625 16:29:28.827925   54199 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0625 16:29:28.827936   54199 command_runner.go:130] > # IP address on which the stream server will listen.
	I0625 16:29:28.827946   54199 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0625 16:29:28.827957   54199 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0625 16:29:28.827968   54199 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0625 16:29:28.827973   54199 command_runner.go:130] > # stream_port = "0"
	I0625 16:29:28.827981   54199 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0625 16:29:28.827987   54199 command_runner.go:130] > # stream_enable_tls = false
	I0625 16:29:28.828001   54199 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0625 16:29:28.828011   54199 command_runner.go:130] > # stream_idle_timeout = ""
	I0625 16:29:28.828024   54199 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0625 16:29:28.828037   54199 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0625 16:29:28.828043   54199 command_runner.go:130] > # minutes.
	I0625 16:29:28.828049   54199 command_runner.go:130] > # stream_tls_cert = ""
	I0625 16:29:28.828063   54199 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0625 16:29:28.828074   54199 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0625 16:29:28.828086   54199 command_runner.go:130] > # stream_tls_key = ""
	I0625 16:29:28.828099   54199 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0625 16:29:28.828111   54199 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0625 16:29:28.828130   54199 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0625 16:29:28.828140   54199 command_runner.go:130] > # stream_tls_ca = ""
	I0625 16:29:28.828151   54199 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0625 16:29:28.828161   54199 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0625 16:29:28.828177   54199 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0625 16:29:28.828187   54199 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0625 16:29:28.828201   54199 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0625 16:29:28.828213   54199 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0625 16:29:28.828222   54199 command_runner.go:130] > [crio.runtime]
	I0625 16:29:28.828233   54199 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0625 16:29:28.828264   54199 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0625 16:29:28.828277   54199 command_runner.go:130] > # "nofile=1024:2048"
	I0625 16:29:28.828290   54199 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0625 16:29:28.828300   54199 command_runner.go:130] > # default_ulimits = [
	I0625 16:29:28.828306   54199 command_runner.go:130] > # ]
	I0625 16:29:28.828319   54199 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0625 16:29:28.828326   54199 command_runner.go:130] > # no_pivot = false
	I0625 16:29:28.828334   54199 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0625 16:29:28.828348   54199 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0625 16:29:28.828365   54199 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0625 16:29:28.828377   54199 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0625 16:29:28.828389   54199 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0625 16:29:28.828403   54199 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0625 16:29:28.828411   54199 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0625 16:29:28.828416   54199 command_runner.go:130] > # Cgroup setting for conmon
	I0625 16:29:28.828431   54199 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0625 16:29:28.828441   54199 command_runner.go:130] > conmon_cgroup = "pod"
	I0625 16:29:28.828454   54199 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0625 16:29:28.828465   54199 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0625 16:29:28.828479   54199 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0625 16:29:28.828488   54199 command_runner.go:130] > conmon_env = [
	I0625 16:29:28.828498   54199 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0625 16:29:28.828504   54199 command_runner.go:130] > ]
	I0625 16:29:28.828512   54199 command_runner.go:130] > # Additional environment variables to set for all the
	I0625 16:29:28.828524   54199 command_runner.go:130] > # containers. These are overridden if set in the
	I0625 16:29:28.828537   54199 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0625 16:29:28.828547   54199 command_runner.go:130] > # default_env = [
	I0625 16:29:28.828557   54199 command_runner.go:130] > # ]
	I0625 16:29:28.828566   54199 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0625 16:29:28.828579   54199 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0625 16:29:28.828585   54199 command_runner.go:130] > # selinux = false
	I0625 16:29:28.828594   54199 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0625 16:29:28.828609   54199 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0625 16:29:28.828622   54199 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0625 16:29:28.828631   54199 command_runner.go:130] > # seccomp_profile = ""
	I0625 16:29:28.828644   54199 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0625 16:29:28.828656   54199 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0625 16:29:28.828667   54199 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0625 16:29:28.828674   54199 command_runner.go:130] > # which might increase security.
	I0625 16:29:28.828682   54199 command_runner.go:130] > # This option is currently deprecated,
	I0625 16:29:28.828696   54199 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0625 16:29:28.828707   54199 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0625 16:29:28.828722   54199 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0625 16:29:28.828735   54199 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0625 16:29:28.828749   54199 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0625 16:29:28.828758   54199 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0625 16:29:28.828769   54199 command_runner.go:130] > # This option supports live configuration reload.
	I0625 16:29:28.828779   54199 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0625 16:29:28.828793   54199 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0625 16:29:28.828804   54199 command_runner.go:130] > # the cgroup blockio controller.
	I0625 16:29:28.828815   54199 command_runner.go:130] > # blockio_config_file = ""
	I0625 16:29:28.828828   54199 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0625 16:29:28.828837   54199 command_runner.go:130] > # blockio parameters.
	I0625 16:29:28.828841   54199 command_runner.go:130] > # blockio_reload = false
	I0625 16:29:28.828852   54199 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0625 16:29:28.828862   54199 command_runner.go:130] > # irqbalance daemon.
	I0625 16:29:28.828874   54199 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0625 16:29:28.828888   54199 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0625 16:29:28.828906   54199 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0625 16:29:28.828920   54199 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0625 16:29:28.828930   54199 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0625 16:29:28.828938   54199 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0625 16:29:28.828950   54199 command_runner.go:130] > # This option supports live configuration reload.
	I0625 16:29:28.828962   54199 command_runner.go:130] > # rdt_config_file = ""
	I0625 16:29:28.828972   54199 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0625 16:29:28.828983   54199 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0625 16:29:28.829006   54199 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0625 16:29:28.829014   54199 command_runner.go:130] > # separate_pull_cgroup = ""
	I0625 16:29:28.829021   54199 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0625 16:29:28.829033   54199 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0625 16:29:28.829043   54199 command_runner.go:130] > # will be added.
	I0625 16:29:28.829050   54199 command_runner.go:130] > # default_capabilities = [
	I0625 16:29:28.829059   54199 command_runner.go:130] > # 	"CHOWN",
	I0625 16:29:28.829069   54199 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0625 16:29:28.829078   54199 command_runner.go:130] > # 	"FSETID",
	I0625 16:29:28.829088   54199 command_runner.go:130] > # 	"FOWNER",
	I0625 16:29:28.829096   54199 command_runner.go:130] > # 	"SETGID",
	I0625 16:29:28.829104   54199 command_runner.go:130] > # 	"SETUID",
	I0625 16:29:28.829109   54199 command_runner.go:130] > # 	"SETPCAP",
	I0625 16:29:28.829120   54199 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0625 16:29:28.829129   54199 command_runner.go:130] > # 	"KILL",
	I0625 16:29:28.829138   54199 command_runner.go:130] > # ]
	I0625 16:29:28.829153   54199 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0625 16:29:28.829166   54199 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0625 16:29:28.829177   54199 command_runner.go:130] > # add_inheritable_capabilities = false
	I0625 16:29:28.829187   54199 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0625 16:29:28.829197   54199 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0625 16:29:28.829207   54199 command_runner.go:130] > default_sysctls = [
	I0625 16:29:28.829219   54199 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0625 16:29:28.829227   54199 command_runner.go:130] > ]
	I0625 16:29:28.829238   54199 command_runner.go:130] > # List of devices on the host that a
	I0625 16:29:28.829251   54199 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0625 16:29:28.829261   54199 command_runner.go:130] > # allowed_devices = [
	I0625 16:29:28.829267   54199 command_runner.go:130] > # 	"/dev/fuse",
	I0625 16:29:28.829273   54199 command_runner.go:130] > # ]
	I0625 16:29:28.829278   54199 command_runner.go:130] > # List of additional devices, specified as
	I0625 16:29:28.829294   54199 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0625 16:29:28.829306   54199 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0625 16:29:28.829318   54199 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0625 16:29:28.829329   54199 command_runner.go:130] > # additional_devices = [
	I0625 16:29:28.829337   54199 command_runner.go:130] > # ]
	I0625 16:29:28.829354   54199 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0625 16:29:28.829362   54199 command_runner.go:130] > # cdi_spec_dirs = [
	I0625 16:29:28.829366   54199 command_runner.go:130] > # 	"/etc/cdi",
	I0625 16:29:28.829376   54199 command_runner.go:130] > # 	"/var/run/cdi",
	I0625 16:29:28.829385   54199 command_runner.go:130] > # ]
	I0625 16:29:28.829398   54199 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0625 16:29:28.829411   54199 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0625 16:29:28.829420   54199 command_runner.go:130] > # Defaults to false.
	I0625 16:29:28.829432   54199 command_runner.go:130] > # device_ownership_from_security_context = false
	I0625 16:29:28.829444   54199 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0625 16:29:28.829452   54199 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0625 16:29:28.829458   54199 command_runner.go:130] > # hooks_dir = [
	I0625 16:29:28.829470   54199 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0625 16:29:28.829478   54199 command_runner.go:130] > # ]
	I0625 16:29:28.829490   54199 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0625 16:29:28.829503   54199 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0625 16:29:28.829514   54199 command_runner.go:130] > # its default mounts from the following two files:
	I0625 16:29:28.829522   54199 command_runner.go:130] > #
	I0625 16:29:28.829530   54199 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0625 16:29:28.829540   54199 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0625 16:29:28.829553   54199 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0625 16:29:28.829561   54199 command_runner.go:130] > #
	I0625 16:29:28.829574   54199 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0625 16:29:28.829588   54199 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0625 16:29:28.829601   54199 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0625 16:29:28.829612   54199 command_runner.go:130] > #      only add mounts it finds in this file.
	I0625 16:29:28.829618   54199 command_runner.go:130] > #
	I0625 16:29:28.829622   54199 command_runner.go:130] > # default_mounts_file = ""
	I0625 16:29:28.829634   54199 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0625 16:29:28.829648   54199 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0625 16:29:28.829657   54199 command_runner.go:130] > pids_limit = 1024
	I0625 16:29:28.829670   54199 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0625 16:29:28.829683   54199 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0625 16:29:28.829695   54199 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0625 16:29:28.829707   54199 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0625 16:29:28.829716   54199 command_runner.go:130] > # log_size_max = -1
	I0625 16:29:28.829731   54199 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0625 16:29:28.829741   54199 command_runner.go:130] > # log_to_journald = false
	I0625 16:29:28.829754   54199 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0625 16:29:28.829764   54199 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0625 16:29:28.829776   54199 command_runner.go:130] > # Path to directory for container attach sockets.
	I0625 16:29:28.829787   54199 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0625 16:29:28.829795   54199 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0625 16:29:28.829800   54199 command_runner.go:130] > # bind_mount_prefix = ""
	I0625 16:29:28.829813   54199 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0625 16:29:28.829823   54199 command_runner.go:130] > # read_only = false
	I0625 16:29:28.829836   54199 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0625 16:29:28.829848   54199 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0625 16:29:28.829858   54199 command_runner.go:130] > # live configuration reload.
	I0625 16:29:28.829869   54199 command_runner.go:130] > # log_level = "info"
	I0625 16:29:28.829879   54199 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0625 16:29:28.829887   54199 command_runner.go:130] > # This option supports live configuration reload.
	I0625 16:29:28.829896   54199 command_runner.go:130] > # log_filter = ""
	I0625 16:29:28.829910   54199 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0625 16:29:28.829925   54199 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0625 16:29:28.829935   54199 command_runner.go:130] > # separated by comma.
	I0625 16:29:28.829951   54199 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0625 16:29:28.829960   54199 command_runner.go:130] > # uid_mappings = ""
	I0625 16:29:28.829969   54199 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0625 16:29:28.829982   54199 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0625 16:29:28.829993   54199 command_runner.go:130] > # separated by comma.
	I0625 16:29:28.830008   54199 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0625 16:29:28.830018   54199 command_runner.go:130] > # gid_mappings = ""
	I0625 16:29:28.830027   54199 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0625 16:29:28.830040   54199 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0625 16:29:28.830050   54199 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0625 16:29:28.830064   54199 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0625 16:29:28.830075   54199 command_runner.go:130] > # minimum_mappable_uid = -1
	I0625 16:29:28.830089   54199 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0625 16:29:28.830101   54199 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0625 16:29:28.830115   54199 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0625 16:29:28.830130   54199 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0625 16:29:28.830138   54199 command_runner.go:130] > # minimum_mappable_gid = -1
	I0625 16:29:28.830145   54199 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0625 16:29:28.830158   54199 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0625 16:29:28.830172   54199 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0625 16:29:28.830182   54199 command_runner.go:130] > # ctr_stop_timeout = 30
	I0625 16:29:28.830191   54199 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0625 16:29:28.830204   54199 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0625 16:29:28.830215   54199 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0625 16:29:28.830223   54199 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0625 16:29:28.830228   54199 command_runner.go:130] > drop_infra_ctr = false
	I0625 16:29:28.830240   54199 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0625 16:29:28.830253   54199 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0625 16:29:28.830268   54199 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0625 16:29:28.830278   54199 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0625 16:29:28.830292   54199 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0625 16:29:28.830304   54199 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0625 16:29:28.830313   54199 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0625 16:29:28.830323   54199 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0625 16:29:28.830334   54199 command_runner.go:130] > # shared_cpuset = ""
	I0625 16:29:28.830347   54199 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0625 16:29:28.830363   54199 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0625 16:29:28.830372   54199 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0625 16:29:28.830386   54199 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0625 16:29:28.830395   54199 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0625 16:29:28.830400   54199 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0625 16:29:28.830413   54199 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0625 16:29:28.830423   54199 command_runner.go:130] > # enable_criu_support = false
	I0625 16:29:28.830435   54199 command_runner.go:130] > # Enable/disable the generation of the container,
	I0625 16:29:28.830448   54199 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0625 16:29:28.830458   54199 command_runner.go:130] > # enable_pod_events = false
	I0625 16:29:28.830482   54199 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0625 16:29:28.830507   54199 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0625 16:29:28.830517   54199 command_runner.go:130] > # default_runtime = "runc"
	I0625 16:29:28.830529   54199 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0625 16:29:28.830542   54199 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0625 16:29:28.830558   54199 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0625 16:29:28.830570   54199 command_runner.go:130] > # creation as a file is not desired either.
	I0625 16:29:28.830584   54199 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0625 16:29:28.830596   54199 command_runner.go:130] > # the hostname is being managed dynamically.
	I0625 16:29:28.830606   54199 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0625 16:29:28.830615   54199 command_runner.go:130] > # ]
	I0625 16:29:28.830626   54199 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0625 16:29:28.830636   54199 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0625 16:29:28.830650   54199 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0625 16:29:28.830662   54199 command_runner.go:130] > # Each entry in the table should follow the format:
	I0625 16:29:28.830671   54199 command_runner.go:130] > #
	I0625 16:29:28.830678   54199 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0625 16:29:28.830689   54199 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0625 16:29:28.830713   54199 command_runner.go:130] > # runtime_type = "oci"
	I0625 16:29:28.830721   54199 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0625 16:29:28.830734   54199 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0625 16:29:28.830746   54199 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0625 16:29:28.830757   54199 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0625 16:29:28.830767   54199 command_runner.go:130] > # monitor_env = []
	I0625 16:29:28.830778   54199 command_runner.go:130] > # privileged_without_host_devices = false
	I0625 16:29:28.830788   54199 command_runner.go:130] > # allowed_annotations = []
	I0625 16:29:28.830798   54199 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0625 16:29:28.830804   54199 command_runner.go:130] > # Where:
	I0625 16:29:28.830812   54199 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0625 16:29:28.830827   54199 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0625 16:29:28.830840   54199 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0625 16:29:28.830853   54199 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0625 16:29:28.830862   54199 command_runner.go:130] > #   in $PATH.
	I0625 16:29:28.830875   54199 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0625 16:29:28.830884   54199 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0625 16:29:28.830893   54199 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0625 16:29:28.830902   54199 command_runner.go:130] > #   state.
	I0625 16:29:28.830916   54199 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0625 16:29:28.830929   54199 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0625 16:29:28.830943   54199 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0625 16:29:28.830955   54199 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0625 16:29:28.830968   54199 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0625 16:29:28.830978   54199 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0625 16:29:28.830988   54199 command_runner.go:130] > #   The currently recognized values are:
	I0625 16:29:28.831003   54199 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0625 16:29:28.831018   54199 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0625 16:29:28.831030   54199 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0625 16:29:28.831043   54199 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0625 16:29:28.831056   54199 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0625 16:29:28.831067   54199 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0625 16:29:28.831081   54199 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0625 16:29:28.831095   54199 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0625 16:29:28.831108   54199 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0625 16:29:28.831121   54199 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0625 16:29:28.831131   54199 command_runner.go:130] > #   deprecated option "conmon".
	I0625 16:29:28.831142   54199 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0625 16:29:28.831148   54199 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0625 16:29:28.831159   54199 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0625 16:29:28.831172   54199 command_runner.go:130] > #   should be moved to the container's cgroup
	I0625 16:29:28.831187   54199 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0625 16:29:28.831197   54199 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0625 16:29:28.831211   54199 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0625 16:29:28.831222   54199 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0625 16:29:28.831229   54199 command_runner.go:130] > #
	I0625 16:29:28.831234   54199 command_runner.go:130] > # Using the seccomp notifier feature:
	I0625 16:29:28.831242   54199 command_runner.go:130] > #
	I0625 16:29:28.831255   54199 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0625 16:29:28.831269   54199 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0625 16:29:28.831277   54199 command_runner.go:130] > #
	I0625 16:29:28.831287   54199 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0625 16:29:28.831300   54199 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0625 16:29:28.831308   54199 command_runner.go:130] > #
	I0625 16:29:28.831317   54199 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0625 16:29:28.831325   54199 command_runner.go:130] > # feature.
	I0625 16:29:28.831330   54199 command_runner.go:130] > #
	I0625 16:29:28.831344   54199 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0625 16:29:28.831362   54199 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0625 16:29:28.831374   54199 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0625 16:29:28.831387   54199 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0625 16:29:28.831400   54199 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0625 16:29:28.831406   54199 command_runner.go:130] > #
	I0625 16:29:28.831414   54199 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0625 16:29:28.831428   54199 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0625 16:29:28.831437   54199 command_runner.go:130] > #
	I0625 16:29:28.831450   54199 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0625 16:29:28.831462   54199 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0625 16:29:28.831470   54199 command_runner.go:130] > #
	I0625 16:29:28.831483   54199 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0625 16:29:28.831491   54199 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0625 16:29:28.831497   54199 command_runner.go:130] > # limitation.
	I0625 16:29:28.831508   54199 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0625 16:29:28.831520   54199 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0625 16:29:28.831529   54199 command_runner.go:130] > runtime_type = "oci"
	I0625 16:29:28.831539   54199 command_runner.go:130] > runtime_root = "/run/runc"
	I0625 16:29:28.831548   54199 command_runner.go:130] > runtime_config_path = ""
	I0625 16:29:28.831556   54199 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0625 16:29:28.831565   54199 command_runner.go:130] > monitor_cgroup = "pod"
	I0625 16:29:28.831573   54199 command_runner.go:130] > monitor_exec_cgroup = ""
	I0625 16:29:28.831580   54199 command_runner.go:130] > monitor_env = [
	I0625 16:29:28.831589   54199 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0625 16:29:28.831598   54199 command_runner.go:130] > ]
	I0625 16:29:28.831608   54199 command_runner.go:130] > privileged_without_host_devices = false
	I0625 16:29:28.831621   54199 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0625 16:29:28.831633   54199 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0625 16:29:28.831646   54199 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0625 16:29:28.831659   54199 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0625 16:29:28.831672   54199 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0625 16:29:28.831686   54199 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0625 16:29:28.831704   54199 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0625 16:29:28.831719   54199 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0625 16:29:28.831731   54199 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0625 16:29:28.831745   54199 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0625 16:29:28.831751   54199 command_runner.go:130] > # Example:
	I0625 16:29:28.831756   54199 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0625 16:29:28.831763   54199 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0625 16:29:28.831768   54199 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0625 16:29:28.831777   54199 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0625 16:29:28.831786   54199 command_runner.go:130] > # cpuset = 0
	I0625 16:29:28.831796   54199 command_runner.go:130] > # cpushares = "0-1"
	I0625 16:29:28.831805   54199 command_runner.go:130] > # Where:
	I0625 16:29:28.831815   54199 command_runner.go:130] > # The workload name is workload-type.
	I0625 16:29:28.831830   54199 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0625 16:29:28.831842   54199 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0625 16:29:28.831852   54199 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0625 16:29:28.831861   54199 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0625 16:29:28.831869   54199 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0625 16:29:28.831874   54199 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0625 16:29:28.831882   54199 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0625 16:29:28.831888   54199 command_runner.go:130] > # Default value is set to true
	I0625 16:29:28.831893   54199 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0625 16:29:28.831900   54199 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0625 16:29:28.831905   54199 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0625 16:29:28.831912   54199 command_runner.go:130] > # Default value is set to 'false'
	I0625 16:29:28.831916   54199 command_runner.go:130] > # disable_hostport_mapping = false
	I0625 16:29:28.831925   54199 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0625 16:29:28.831929   54199 command_runner.go:130] > #
	I0625 16:29:28.831939   54199 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0625 16:29:28.831949   54199 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0625 16:29:28.831959   54199 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0625 16:29:28.831969   54199 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0625 16:29:28.831978   54199 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0625 16:29:28.831983   54199 command_runner.go:130] > [crio.image]
	I0625 16:29:28.831993   54199 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0625 16:29:28.831998   54199 command_runner.go:130] > # default_transport = "docker://"
	I0625 16:29:28.832003   54199 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0625 16:29:28.832009   54199 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0625 16:29:28.832013   54199 command_runner.go:130] > # global_auth_file = ""
	I0625 16:29:28.832018   54199 command_runner.go:130] > # The image used to instantiate infra containers.
	I0625 16:29:28.832022   54199 command_runner.go:130] > # This option supports live configuration reload.
	I0625 16:29:28.832027   54199 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0625 16:29:28.832033   54199 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0625 16:29:28.832038   54199 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0625 16:29:28.832043   54199 command_runner.go:130] > # This option supports live configuration reload.
	I0625 16:29:28.832047   54199 command_runner.go:130] > # pause_image_auth_file = ""
	I0625 16:29:28.832052   54199 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0625 16:29:28.832057   54199 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0625 16:29:28.832063   54199 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0625 16:29:28.832068   54199 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0625 16:29:28.832072   54199 command_runner.go:130] > # pause_command = "/pause"
	I0625 16:29:28.832077   54199 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0625 16:29:28.832082   54199 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0625 16:29:28.832087   54199 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0625 16:29:28.832094   54199 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0625 16:29:28.832105   54199 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0625 16:29:28.832111   54199 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0625 16:29:28.832117   54199 command_runner.go:130] > # pinned_images = [
	I0625 16:29:28.832120   54199 command_runner.go:130] > # ]
	I0625 16:29:28.832127   54199 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0625 16:29:28.832135   54199 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0625 16:29:28.832142   54199 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0625 16:29:28.832153   54199 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0625 16:29:28.832166   54199 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0625 16:29:28.832174   54199 command_runner.go:130] > # signature_policy = ""
	I0625 16:29:28.832182   54199 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0625 16:29:28.832188   54199 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0625 16:29:28.832197   54199 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0625 16:29:28.832206   54199 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0625 16:29:28.832214   54199 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0625 16:29:28.832218   54199 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0625 16:29:28.832226   54199 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0625 16:29:28.832236   54199 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0625 16:29:28.832242   54199 command_runner.go:130] > # changing them here.
	I0625 16:29:28.832246   54199 command_runner.go:130] > # insecure_registries = [
	I0625 16:29:28.832252   54199 command_runner.go:130] > # ]
	I0625 16:29:28.832258   54199 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0625 16:29:28.832265   54199 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0625 16:29:28.832269   54199 command_runner.go:130] > # image_volumes = "mkdir"
	I0625 16:29:28.832278   54199 command_runner.go:130] > # Temporary directory to use for storing big files
	I0625 16:29:28.832282   54199 command_runner.go:130] > # big_files_temporary_dir = ""
	I0625 16:29:28.832288   54199 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0625 16:29:28.832294   54199 command_runner.go:130] > # CNI plugins.
	I0625 16:29:28.832298   54199 command_runner.go:130] > [crio.network]
	I0625 16:29:28.832305   54199 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0625 16:29:28.832313   54199 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0625 16:29:28.832317   54199 command_runner.go:130] > # cni_default_network = ""
	I0625 16:29:28.832325   54199 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0625 16:29:28.832330   54199 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0625 16:29:28.832336   54199 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0625 16:29:28.832342   54199 command_runner.go:130] > # plugin_dirs = [
	I0625 16:29:28.832346   54199 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0625 16:29:28.832355   54199 command_runner.go:130] > # ]
	I0625 16:29:28.832361   54199 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0625 16:29:28.832367   54199 command_runner.go:130] > [crio.metrics]
	I0625 16:29:28.832372   54199 command_runner.go:130] > # Globally enable or disable metrics support.
	I0625 16:29:28.832378   54199 command_runner.go:130] > enable_metrics = true
	I0625 16:29:28.832383   54199 command_runner.go:130] > # Specify enabled metrics collectors.
	I0625 16:29:28.832390   54199 command_runner.go:130] > # Per default all metrics are enabled.
	I0625 16:29:28.832396   54199 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0625 16:29:28.832404   54199 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0625 16:29:28.832412   54199 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0625 16:29:28.832419   54199 command_runner.go:130] > # metrics_collectors = [
	I0625 16:29:28.832422   54199 command_runner.go:130] > # 	"operations",
	I0625 16:29:28.832430   54199 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0625 16:29:28.832434   54199 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0625 16:29:28.832438   54199 command_runner.go:130] > # 	"operations_errors",
	I0625 16:29:28.832443   54199 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0625 16:29:28.832449   54199 command_runner.go:130] > # 	"image_pulls_by_name",
	I0625 16:29:28.832454   54199 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0625 16:29:28.832461   54199 command_runner.go:130] > # 	"image_pulls_failures",
	I0625 16:29:28.832465   54199 command_runner.go:130] > # 	"image_pulls_successes",
	I0625 16:29:28.832472   54199 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0625 16:29:28.832476   54199 command_runner.go:130] > # 	"image_layer_reuse",
	I0625 16:29:28.832482   54199 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0625 16:29:28.832487   54199 command_runner.go:130] > # 	"containers_oom_total",
	I0625 16:29:28.832493   54199 command_runner.go:130] > # 	"containers_oom",
	I0625 16:29:28.832496   54199 command_runner.go:130] > # 	"processes_defunct",
	I0625 16:29:28.832502   54199 command_runner.go:130] > # 	"operations_total",
	I0625 16:29:28.832507   54199 command_runner.go:130] > # 	"operations_latency_seconds",
	I0625 16:29:28.832514   54199 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0625 16:29:28.832518   54199 command_runner.go:130] > # 	"operations_errors_total",
	I0625 16:29:28.832524   54199 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0625 16:29:28.832528   54199 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0625 16:29:28.832533   54199 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0625 16:29:28.832543   54199 command_runner.go:130] > # 	"image_pulls_success_total",
	I0625 16:29:28.832554   54199 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0625 16:29:28.832563   54199 command_runner.go:130] > # 	"containers_oom_count_total",
	I0625 16:29:28.832570   54199 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0625 16:29:28.832575   54199 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0625 16:29:28.832578   54199 command_runner.go:130] > # ]
	I0625 16:29:28.832583   54199 command_runner.go:130] > # The port on which the metrics server will listen.
	I0625 16:29:28.832589   54199 command_runner.go:130] > # metrics_port = 9090
	I0625 16:29:28.832594   54199 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0625 16:29:28.832600   54199 command_runner.go:130] > # metrics_socket = ""
	I0625 16:29:28.832605   54199 command_runner.go:130] > # The certificate for the secure metrics server.
	I0625 16:29:28.832613   54199 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0625 16:29:28.832619   54199 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0625 16:29:28.832625   54199 command_runner.go:130] > # certificate on any modification event.
	I0625 16:29:28.832629   54199 command_runner.go:130] > # metrics_cert = ""
	I0625 16:29:28.832637   54199 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0625 16:29:28.832641   54199 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0625 16:29:28.832647   54199 command_runner.go:130] > # metrics_key = ""
	I0625 16:29:28.832653   54199 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0625 16:29:28.832659   54199 command_runner.go:130] > [crio.tracing]
	I0625 16:29:28.832665   54199 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0625 16:29:28.832670   54199 command_runner.go:130] > # enable_tracing = false
	I0625 16:29:28.832677   54199 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0625 16:29:28.832683   54199 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0625 16:29:28.832690   54199 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0625 16:29:28.832697   54199 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0625 16:29:28.832701   54199 command_runner.go:130] > # CRI-O NRI configuration.
	I0625 16:29:28.832707   54199 command_runner.go:130] > [crio.nri]
	I0625 16:29:28.832711   54199 command_runner.go:130] > # Globally enable or disable NRI.
	I0625 16:29:28.832717   54199 command_runner.go:130] > # enable_nri = false
	I0625 16:29:28.832721   54199 command_runner.go:130] > # NRI socket to listen on.
	I0625 16:29:28.832728   54199 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0625 16:29:28.832732   54199 command_runner.go:130] > # NRI plugin directory to use.
	I0625 16:29:28.832739   54199 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0625 16:29:28.832744   54199 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0625 16:29:28.832751   54199 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0625 16:29:28.832756   54199 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0625 16:29:28.832763   54199 command_runner.go:130] > # nri_disable_connections = false
	I0625 16:29:28.832768   54199 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0625 16:29:28.832775   54199 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0625 16:29:28.832780   54199 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0625 16:29:28.832787   54199 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0625 16:29:28.832793   54199 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0625 16:29:28.832798   54199 command_runner.go:130] > [crio.stats]
	I0625 16:29:28.832803   54199 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0625 16:29:28.832811   54199 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0625 16:29:28.832815   54199 command_runner.go:130] > # stats_collection_period = 0
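	For reference, everything minikube actually overrides in the dump above (the uncommented keys) comes down to a handful of settings. The Go sketch below is not part of the test run: it assumes the dumped file has been saved locally as crio.conf (a hypothetical name) and that github.com/BurntSushi/toml is available, and simply decodes the keys most relevant to the kubelet hand-off.

	package main

	import (
		"fmt"
		"log"

		"github.com/BurntSushi/toml"
	)

	// crioConfig models only the fields this sketch reads; the real CRI-O
	// configuration has many more tables and keys.
	type crioConfig struct {
		Crio struct {
			Runtime struct {
				CgroupManager string `toml:"cgroup_manager"`
				ConmonCgroup  string `toml:"conmon_cgroup"`
				PidsLimit     int64  `toml:"pids_limit"`
			} `toml:"runtime"`
		} `toml:"crio"`
	}

	func main() {
		var cfg crioConfig
		// crio.conf is a hypothetical local copy of the configuration dumped above.
		if _, err := toml.DecodeFile("crio.conf", &cfg); err != nil {
			log.Fatal(err)
		}
		fmt.Println("cgroup_manager:", cfg.Crio.Runtime.CgroupManager) // "cgroupfs" in the dump
		fmt.Println("conmon_cgroup:", cfg.Crio.Runtime.ConmonCgroup)   // "pod" in the dump
		fmt.Println("pids_limit:", cfg.Crio.Runtime.PidsLimit)         // 1024 in the dump
	}

	Run against the configuration shown here, this would print cgroupfs, pod and 1024, matching the kubeadm options logged next.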
	I0625 16:29:28.832922   54199 cni.go:84] Creating CNI manager for ""
	I0625 16:29:28.832932   54199 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0625 16:29:28.832939   54199 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0625 16:29:28.832958   54199 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.231 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-552402 NodeName:multinode-552402 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.231 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0625 16:29:28.833084   54199 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.231
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-552402"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.231
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.231"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
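	The block above is a single multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a sketch only, the documents can be enumerated with gopkg.in/yaml.v3 (an assumed dependency, not something the test itself uses):

	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3" // assumed dependency for this sketch
	)

	func main() {
		// Path taken from the scp step below; any multi-document kubeadm config works here.
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
				break // no more documents
			} else if err != nil {
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
			// Each document carries its own apiVersion/kind pair.
			fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
		}
	}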
	I0625 16:29:28.833141   54199 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0625 16:29:28.844146   54199 command_runner.go:130] > kubeadm
	I0625 16:29:28.844160   54199 command_runner.go:130] > kubectl
	I0625 16:29:28.844164   54199 command_runner.go:130] > kubelet
	I0625 16:29:28.844214   54199 binaries.go:44] Found k8s binaries, skipping transfer
	I0625 16:29:28.844258   54199 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0625 16:29:28.854259   54199 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0625 16:29:28.870641   54199 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0625 16:29:28.886494   54199 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0625 16:29:28.902655   54199 ssh_runner.go:195] Run: grep 192.168.39.231	control-plane.minikube.internal$ /etc/hosts
	I0625 16:29:28.906953   54199 command_runner.go:130] > 192.168.39.231	control-plane.minikube.internal
	I0625 16:29:28.907032   54199 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 16:29:29.044061   54199 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0625 16:29:29.058261   54199 certs.go:68] Setting up /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/multinode-552402 for IP: 192.168.39.231
	I0625 16:29:29.058276   54199 certs.go:194] generating shared ca certs ...
	I0625 16:29:29.058296   54199 certs.go:226] acquiring lock for ca certs: {Name:mkac904b769881cd26c50f043dc80ff92937f71f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:29:29.058446   54199 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key
	I0625 16:29:29.058505   54199 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key
	I0625 16:29:29.058516   54199 certs.go:256] generating profile certs ...
	I0625 16:29:29.058592   54199 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/multinode-552402/client.key
	I0625 16:29:29.058647   54199 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/multinode-552402/apiserver.key.0cdd1bbb
	I0625 16:29:29.058688   54199 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/multinode-552402/proxy-client.key
	I0625 16:29:29.058698   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0625 16:29:29.058709   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0625 16:29:29.058722   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0625 16:29:29.058732   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0625 16:29:29.058741   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/multinode-552402/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0625 16:29:29.058752   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/multinode-552402/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0625 16:29:29.058764   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/multinode-552402/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0625 16:29:29.058772   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/multinode-552402/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0625 16:29:29.058822   54199 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem (1338 bytes)
	W0625 16:29:29.058847   54199 certs.go:480] ignoring /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239_empty.pem, impossibly tiny 0 bytes
	I0625 16:29:29.058858   54199 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem (1679 bytes)
	I0625 16:29:29.058879   54199 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem (1078 bytes)
	I0625 16:29:29.058901   54199 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem (1123 bytes)
	I0625 16:29:29.058921   54199 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem (1679 bytes)
	I0625 16:29:29.058996   54199 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem (1708 bytes)
	I0625 16:29:29.059027   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:29:29.059040   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem -> /usr/share/ca-certificates/21239.pem
	I0625 16:29:29.059049   54199 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> /usr/share/ca-certificates/212392.pem
	I0625 16:29:29.059571   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0625 16:29:29.083401   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0625 16:29:29.106823   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0625 16:29:29.132199   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0625 16:29:29.155401   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/multinode-552402/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0625 16:29:29.178490   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/multinode-552402/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0625 16:29:29.201308   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/multinode-552402/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0625 16:29:29.225004   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/multinode-552402/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0625 16:29:29.248552   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0625 16:29:29.271690   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem --> /usr/share/ca-certificates/21239.pem (1338 bytes)
	I0625 16:29:29.295084   54199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem --> /usr/share/ca-certificates/212392.pem (1708 bytes)
	I0625 16:29:29.319053   54199 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0625 16:29:29.335076   54199 ssh_runner.go:195] Run: openssl version
	I0625 16:29:29.340692   54199 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0625 16:29:29.340759   54199 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21239.pem && ln -fs /usr/share/ca-certificates/21239.pem /etc/ssl/certs/21239.pem"
	I0625 16:29:29.351365   54199 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21239.pem
	I0625 16:29:29.355719   54199 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 25 15:51 /usr/share/ca-certificates/21239.pem
	I0625 16:29:29.355782   54199 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 25 15:51 /usr/share/ca-certificates/21239.pem
	I0625 16:29:29.355830   54199 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21239.pem
	I0625 16:29:29.361364   54199 command_runner.go:130] > 51391683
	I0625 16:29:29.361403   54199 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21239.pem /etc/ssl/certs/51391683.0"
	I0625 16:29:29.370379   54199 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/212392.pem && ln -fs /usr/share/ca-certificates/212392.pem /etc/ssl/certs/212392.pem"
	I0625 16:29:29.380666   54199 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/212392.pem
	I0625 16:29:29.385081   54199 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 25 15:51 /usr/share/ca-certificates/212392.pem
	I0625 16:29:29.385107   54199 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 25 15:51 /usr/share/ca-certificates/212392.pem
	I0625 16:29:29.385131   54199 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/212392.pem
	I0625 16:29:29.390645   54199 command_runner.go:130] > 3ec20f2e
	I0625 16:29:29.390686   54199 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/212392.pem /etc/ssl/certs/3ec20f2e.0"
	I0625 16:29:29.399808   54199 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0625 16:29:29.409958   54199 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:29:29.414415   54199 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 25 15:10 /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:29:29.414604   54199 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 25 15:10 /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:29:29.414637   54199 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:29:29.419943   54199 command_runner.go:130] > b5213941
	I0625 16:29:29.420141   54199 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0625 16:29:29.429026   54199 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0625 16:29:29.433375   54199 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0625 16:29:29.433395   54199 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0625 16:29:29.433402   54199 command_runner.go:130] > Device: 253,1	Inode: 1057301     Links: 1
	I0625 16:29:29.433412   54199 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0625 16:29:29.433421   54199 command_runner.go:130] > Access: 2024-06-25 16:23:19.759396760 +0000
	I0625 16:29:29.433433   54199 command_runner.go:130] > Modify: 2024-06-25 16:23:19.759396760 +0000
	I0625 16:29:29.433447   54199 command_runner.go:130] > Change: 2024-06-25 16:23:19.759396760 +0000
	I0625 16:29:29.433456   54199 command_runner.go:130] >  Birth: 2024-06-25 16:23:19.759396760 +0000
	I0625 16:29:29.433497   54199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0625 16:29:29.438822   54199 command_runner.go:130] > Certificate will not expire
	I0625 16:29:29.439030   54199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0625 16:29:29.444380   54199 command_runner.go:130] > Certificate will not expire
	I0625 16:29:29.444432   54199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0625 16:29:29.449637   54199 command_runner.go:130] > Certificate will not expire
	I0625 16:29:29.449779   54199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0625 16:29:29.455346   54199 command_runner.go:130] > Certificate will not expire
	I0625 16:29:29.455403   54199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0625 16:29:29.460604   54199 command_runner.go:130] > Certificate will not expire
	I0625 16:29:29.460804   54199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0625 16:29:29.466092   54199 command_runner.go:130] > Certificate will not expire
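	Each openssl x509 ... -checkend 86400 call above asks whether the certificate expires within the next 86400 seconds (24 hours); "Certificate will not expire" means it does not. A minimal Go equivalent, shown only as a sketch against one of the certificate paths from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Same certificate the log checks; any PEM-encoded certificate works.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Mirror `-checkend 86400`: does the certificate expire within the next 24 hours?
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}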
	I0625 16:29:29.466150   54199 kubeadm.go:391] StartCluster: {Name:multinode-552402 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:multinode-552402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.177 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 16:29:29.466246   54199 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0625 16:29:29.466288   54199 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0625 16:29:29.502737   54199 command_runner.go:130] > d4f00ecc70fc073d3550f6c89dbb15c1b77b863e7713a761a495c0274be411af
	I0625 16:29:29.502761   54199 command_runner.go:130] > 9e3cf9de6e7ead6b52148dcd4955b58900a7d8518f1f51123b6e1e3d75fcc3e1
	I0625 16:29:29.502774   54199 command_runner.go:130] > dada3d77ec88472cd180091e075101888927a1b93a58d88bd7378fbe100d3045
	I0625 16:29:29.502784   54199 command_runner.go:130] > f74159477c5e02e00dfb27d653217dc9b2d7693cee6730c6af252cf01c5572db
	I0625 16:29:29.502795   54199 command_runner.go:130] > 56b7ee056128dc759220644aa7dc88d47b282cf6f68c6ce88244ec9bef2de09c
	I0625 16:29:29.502805   54199 command_runner.go:130] > 79cd6519b497f35ff1e9ac8c6377ada466699c880f80fd08e64500e8964072a8
	I0625 16:29:29.502817   54199 command_runner.go:130] > 74a9d37ff49363320821cbe35e106f17871f1049d961ffc41b0531aeccfc735f
	I0625 16:29:29.502830   54199 command_runner.go:130] > bd920691a329ba6c3778d2ce3bfd1a1d43b9b4ecd0e0ebe6a6dc63bdfbbe887d
	I0625 16:29:29.502852   54199 cri.go:89] found id: "d4f00ecc70fc073d3550f6c89dbb15c1b77b863e7713a761a495c0274be411af"
	I0625 16:29:29.502863   54199 cri.go:89] found id: "9e3cf9de6e7ead6b52148dcd4955b58900a7d8518f1f51123b6e1e3d75fcc3e1"
	I0625 16:29:29.502868   54199 cri.go:89] found id: "dada3d77ec88472cd180091e075101888927a1b93a58d88bd7378fbe100d3045"
	I0625 16:29:29.502877   54199 cri.go:89] found id: "f74159477c5e02e00dfb27d653217dc9b2d7693cee6730c6af252cf01c5572db"
	I0625 16:29:29.502881   54199 cri.go:89] found id: "56b7ee056128dc759220644aa7dc88d47b282cf6f68c6ce88244ec9bef2de09c"
	I0625 16:29:29.502886   54199 cri.go:89] found id: "79cd6519b497f35ff1e9ac8c6377ada466699c880f80fd08e64500e8964072a8"
	I0625 16:29:29.502890   54199 cri.go:89] found id: "74a9d37ff49363320821cbe35e106f17871f1049d961ffc41b0531aeccfc735f"
	I0625 16:29:29.502896   54199 cri.go:89] found id: "bd920691a329ba6c3778d2ce3bfd1a1d43b9b4ecd0e0ebe6a6dc63bdfbbe887d"
	I0625 16:29:29.502901   54199 cri.go:89] found id: ""
	I0625 16:29:29.502946   54199 ssh_runner.go:195] Run: sudo runc list -f json
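	The eight container IDs found above come from the crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system invocation at the top of this block. A rough sketch of reproducing the same lookup by hand from Go, shelling out to crictl with the identical filter (an assumption about how one might inspect a node, not minikube's own code path):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same filter the log uses: all containers whose pod lives in kube-system.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		ids := strings.Fields(string(out))
		fmt.Printf("found %d kube-system containers\n", len(ids))
		for _, id := range ids {
			fmt.Println(id)
		}
	}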
	
	
	==> CRI-O <==
	Jun 25 16:33:18 multinode-552402 crio[2864]: time="2024-06-25 16:33:18.184859029Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719333198184837862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ed90cce-4078-4049-82c6-4c7cc74e0658 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:33:18 multinode-552402 crio[2864]: time="2024-06-25 16:33:18.185612431Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fc9a66a2-e544-4485-91da-0e937f9e7733 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:33:18 multinode-552402 crio[2864]: time="2024-06-25 16:33:18.185684647Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc9a66a2-e544-4485-91da-0e937f9e7733 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:33:18 multinode-552402 crio[2864]: time="2024-06-25 16:33:18.186154708Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94303a695fe4d81b2707a7de43cdc991378b8299206a0e1e25904e2f455cb8ab,PodSandboxId:dd6de279d05d1dba66fe6175dab37b54fea09d279e06975e7e5cca2e3ca47324,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1719333009536563017,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-97579,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d15691ff-e95d-426b-9545-344419479d75,},Annotations:map[string]string{io.kubernetes.container.hash: f6b99b44,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40d59741ee01b451097aa7966de7d23a2e74c39a2622e3cc802154ffc4dd4c53,PodSandboxId:4d3d31c83b9c757d945ce1f380567d2cb0c636493ba55e3ae8c045f93ac76ee5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1719332975911802419,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6ctrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de38f2c-e56d-43ca-acd6-537a2c8c36c9,},Annotations:map[string]string{io.kubernetes.container.hash: 44a71256,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8647b618ee7b9b7796293cfccaaa79f29452b9ad19f19bd4bf4f5371f911f3ad,PodSandboxId:bda3a47a30d8c1b6ae2548a0a982958dcfcad03512d970765af86cff6f824b35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719332975759627375,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nphd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea3247d1-08d6-4760-8ba1-62cd6d3b7edb,},Annotations:map[string]string{io.kubernetes.container.hash: 95851791,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca97a7bd7222a87e610f58087561023d776edcd8cbb43a5a5b9c57657b895ccf,PodSandboxId:c5027149117e2f151cd3d190cc9399c7c7b8c5d3af1865417001d03e9c5b028a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719332975729150194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jf2ds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3716b4c2-3417-4d41-8143-decc38ce93aa,},Annotations:map[string]string{io.kubernetes.container.hash: 140afee7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",
\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8462e7859192761c30c0ab03423aa6ffa0af7ab3f9a1b1ac724a99b2c73716b,PodSandboxId:a978ac88feb2ac6cc9734d24177b98dd5aefd1a45e60d6bb4aca9fe8ec6fc6ca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1719332975729949551,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638c610e-b5c1-40b3-8972-fbf36c6f1bf0,},Annotations:map[string]string{io.ku
bernetes.container.hash: aa604651,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3410e4d3e33976711626b2e78ed9f2c95d4fab7ae14ffb4db21293db4b1d5d00,PodSandboxId:15e75468ce45230526d0e92a918e1a217a5b2d1f8111666256f12218b2c3f769,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719332971961179305,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2626fd7f4632883b6375eadd6d8a3d1f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bf530af755b58043cd84f310c01986cfe5f2a354d4e6102e40d465ec3a96a81,PodSandboxId:bc9f7cf553f6aa5358b4ec70c5be99fd89a1e6145d4a0076995e42adb43ea697,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719332971906857100,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a56d62a295a356f75f3a9ab79148041,},Annotations:map[string]string{io.kubernetes.container.hash: 9e04
68f2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea4bf60afb2ebac2743296dbe43222df97b74f259f9aa5564423d6b35335f325,PodSandboxId:997e31d954726ed3eba59fdd19135300af4e25306f848e18746fb071a6134919,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719332971924247628,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afcefe2172ce48b51be458f8b4b4ec40,},Annotations:map[string]string{io.kubernetes.container.hash: 12806e87,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c383a4dc6b4ce55513c013b99411811ae775392a7c5c2ecd9c50299edf98bf,PodSandboxId:993e8b320da8aad2b7faf8f09b45956526e3c9cec836c71b3f757156675ff381,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719332971887729272,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57f573733165d81dfacbc3765903f40e,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26129c27a9df69b5bad2e9ad7b5b053e3daf66ccb1a2833c454b8b33c3901d8,PodSandboxId:9cf1c28407eedb9fe47ee75a4593d7653ba0012a2854cccf4619962ab2543533,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1719332672429667487,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-97579,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d15691ff-e95d-426b-9545-344419479d75,},Annotations:map[string]string{io.kubernetes.container.hash: f6b99b44,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4f00ecc70fc073d3550f6c89dbb15c1b77b863e7713a761a495c0274be411af,PodSandboxId:45dca2bbc9e761cebbeaf38b9b0f82b6802937057683876c4cd34dcf4658440d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1719332625591518494,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638c610e-b5c1-40b3-8972-fbf36c6f1bf0,},Annotations:map[string]string{io.kubernetes.container.hash: aa604651,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e3cf9de6e7ead6b52148dcd4955b58900a7d8518f1f51123b6e1e3d75fcc3e1,PodSandboxId:3461599c9ae5b8084dc3c9eae4f23cc1ab079ad7f03de781355e8d350fd7461b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719332624731045573,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jf2ds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3716b4c2-3417-4d41-8143-decc38ce93aa,},Annotations:map[string]string{io.kubernetes.container.hash: 140afee7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dada3d77ec88472cd180091e075101888927a1b93a58d88bd7378fbe100d3045,PodSandboxId:7ca324582eef881fb3ee2a303c68dafc8088ead0efee3c38ca177db602c9a6f3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1719332623000929635,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6ctrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2de38f2c-e56d-43ca-acd6-537a2c8c36c9,},Annotations:map[string]string{io.kubernetes.container.hash: 44a71256,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74159477c5e02e00dfb27d653217dc9b2d7693cee6730c6af252cf01c5572db,PodSandboxId:948aee8fb658d4e608304b1783868152c397c5980937eb797efaa066360d130e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1719332622671325347,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nphd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea3247d1-08d6-4760-8ba1-
62cd6d3b7edb,},Annotations:map[string]string{io.kubernetes.container.hash: 95851791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56b7ee056128dc759220644aa7dc88d47b282cf6f68c6ce88244ec9bef2de09c,PodSandboxId:8c5a93cba3030028a9fda40545ca2e8a936cc10e424196a543be22574fde5ec5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1719332603272043176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2626fd7f4632883b6375eadd6d8a3d1f,},
Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a9d37ff49363320821cbe35e106f17871f1049d961ffc41b0531aeccfc735f,PodSandboxId:08a5de5a0d950dd3b55524a12fd016dc0f5529ddd3b71786c7a561ba6c073767,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1719332603209821793,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afcefe2172ce48b51be458f8b4b4ec40,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 12806e87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79cd6519b497f35ff1e9ac8c6377ada466699c880f80fd08e64500e8964072a8,PodSandboxId:983a83971fdcd6758a676a322438c8b91d38d2bba42eee049e2f037f17b9b2e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1719332603220711629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57f573733165d81dfacbc3765903f40e,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd920691a329ba6c3778d2ce3bfd1a1d43b9b4ecd0e0ebe6a6dc63bdfbbe887d,PodSandboxId:f4a086dccd71fd3a824b232f8e9cb32d36de35cfc549217ff7057c61c47d9eed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1719332603171686785,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a56d62a295a356f75f3a9ab79148041,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 9e0468f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fc9a66a2-e544-4485-91da-0e937f9e7733 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:33:18 multinode-552402 crio[2864]: time="2024-06-25 16:33:18.231687031Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0c25aa6a-bae1-443f-a430-ae3d74d2b721 name=/runtime.v1.RuntimeService/Version
	Jun 25 16:33:18 multinode-552402 crio[2864]: time="2024-06-25 16:33:18.231774437Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0c25aa6a-bae1-443f-a430-ae3d74d2b721 name=/runtime.v1.RuntimeService/Version
	Jun 25 16:33:18 multinode-552402 crio[2864]: time="2024-06-25 16:33:18.232739087Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=22b68967-bc6e-4da0-8fe9-1144456ff585 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:33:18 multinode-552402 crio[2864]: time="2024-06-25 16:33:18.233246926Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719333198233227169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=22b68967-bc6e-4da0-8fe9-1144456ff585 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:33:18 multinode-552402 crio[2864]: time="2024-06-25 16:33:18.233803424Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b5922f1-8c84-4837-863a-4c60b0389ecf name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:33:18 multinode-552402 crio[2864]: time="2024-06-25 16:33:18.233903839Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b5922f1-8c84-4837-863a-4c60b0389ecf name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:33:18 multinode-552402 crio[2864]: time="2024-06-25 16:33:18.234280327Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94303a695fe4d81b2707a7de43cdc991378b8299206a0e1e25904e2f455cb8ab,PodSandboxId:dd6de279d05d1dba66fe6175dab37b54fea09d279e06975e7e5cca2e3ca47324,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1719333009536563017,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-97579,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d15691ff-e95d-426b-9545-344419479d75,},Annotations:map[string]string{io.kubernetes.container.hash: f6b99b44,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40d59741ee01b451097aa7966de7d23a2e74c39a2622e3cc802154ffc4dd4c53,PodSandboxId:4d3d31c83b9c757d945ce1f380567d2cb0c636493ba55e3ae8c045f93ac76ee5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1719332975911802419,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6ctrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de38f2c-e56d-43ca-acd6-537a2c8c36c9,},Annotations:map[string]string{io.kubernetes.container.hash: 44a71256,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8647b618ee7b9b7796293cfccaaa79f29452b9ad19f19bd4bf4f5371f911f3ad,PodSandboxId:bda3a47a30d8c1b6ae2548a0a982958dcfcad03512d970765af86cff6f824b35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719332975759627375,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nphd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea3247d1-08d6-4760-8ba1-62cd6d3b7edb,},Annotations:map[string]string{io.kubernetes.container.hash: 95851791,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca97a7bd7222a87e610f58087561023d776edcd8cbb43a5a5b9c57657b895ccf,PodSandboxId:c5027149117e2f151cd3d190cc9399c7c7b8c5d3af1865417001d03e9c5b028a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719332975729150194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jf2ds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3716b4c2-3417-4d41-8143-decc38ce93aa,},Annotations:map[string]string{io.kubernetes.container.hash: 140afee7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",
\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8462e7859192761c30c0ab03423aa6ffa0af7ab3f9a1b1ac724a99b2c73716b,PodSandboxId:a978ac88feb2ac6cc9734d24177b98dd5aefd1a45e60d6bb4aca9fe8ec6fc6ca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1719332975729949551,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638c610e-b5c1-40b3-8972-fbf36c6f1bf0,},Annotations:map[string]string{io.ku
bernetes.container.hash: aa604651,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3410e4d3e33976711626b2e78ed9f2c95d4fab7ae14ffb4db21293db4b1d5d00,PodSandboxId:15e75468ce45230526d0e92a918e1a217a5b2d1f8111666256f12218b2c3f769,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719332971961179305,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2626fd7f4632883b6375eadd6d8a3d1f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bf530af755b58043cd84f310c01986cfe5f2a354d4e6102e40d465ec3a96a81,PodSandboxId:bc9f7cf553f6aa5358b4ec70c5be99fd89a1e6145d4a0076995e42adb43ea697,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719332971906857100,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a56d62a295a356f75f3a9ab79148041,},Annotations:map[string]string{io.kubernetes.container.hash: 9e04
68f2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea4bf60afb2ebac2743296dbe43222df97b74f259f9aa5564423d6b35335f325,PodSandboxId:997e31d954726ed3eba59fdd19135300af4e25306f848e18746fb071a6134919,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719332971924247628,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afcefe2172ce48b51be458f8b4b4ec40,},Annotations:map[string]string{io.kubernetes.container.hash: 12806e87,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c383a4dc6b4ce55513c013b99411811ae775392a7c5c2ecd9c50299edf98bf,PodSandboxId:993e8b320da8aad2b7faf8f09b45956526e3c9cec836c71b3f757156675ff381,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719332971887729272,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57f573733165d81dfacbc3765903f40e,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26129c27a9df69b5bad2e9ad7b5b053e3daf66ccb1a2833c454b8b33c3901d8,PodSandboxId:9cf1c28407eedb9fe47ee75a4593d7653ba0012a2854cccf4619962ab2543533,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1719332672429667487,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-97579,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d15691ff-e95d-426b-9545-344419479d75,},Annotations:map[string]string{io.kubernetes.container.hash: f6b99b44,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4f00ecc70fc073d3550f6c89dbb15c1b77b863e7713a761a495c0274be411af,PodSandboxId:45dca2bbc9e761cebbeaf38b9b0f82b6802937057683876c4cd34dcf4658440d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1719332625591518494,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638c610e-b5c1-40b3-8972-fbf36c6f1bf0,},Annotations:map[string]string{io.kubernetes.container.hash: aa604651,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e3cf9de6e7ead6b52148dcd4955b58900a7d8518f1f51123b6e1e3d75fcc3e1,PodSandboxId:3461599c9ae5b8084dc3c9eae4f23cc1ab079ad7f03de781355e8d350fd7461b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719332624731045573,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jf2ds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3716b4c2-3417-4d41-8143-decc38ce93aa,},Annotations:map[string]string{io.kubernetes.container.hash: 140afee7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dada3d77ec88472cd180091e075101888927a1b93a58d88bd7378fbe100d3045,PodSandboxId:7ca324582eef881fb3ee2a303c68dafc8088ead0efee3c38ca177db602c9a6f3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1719332623000929635,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6ctrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2de38f2c-e56d-43ca-acd6-537a2c8c36c9,},Annotations:map[string]string{io.kubernetes.container.hash: 44a71256,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74159477c5e02e00dfb27d653217dc9b2d7693cee6730c6af252cf01c5572db,PodSandboxId:948aee8fb658d4e608304b1783868152c397c5980937eb797efaa066360d130e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1719332622671325347,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nphd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea3247d1-08d6-4760-8ba1-
62cd6d3b7edb,},Annotations:map[string]string{io.kubernetes.container.hash: 95851791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56b7ee056128dc759220644aa7dc88d47b282cf6f68c6ce88244ec9bef2de09c,PodSandboxId:8c5a93cba3030028a9fda40545ca2e8a936cc10e424196a543be22574fde5ec5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1719332603272043176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2626fd7f4632883b6375eadd6d8a3d1f,},
Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a9d37ff49363320821cbe35e106f17871f1049d961ffc41b0531aeccfc735f,PodSandboxId:08a5de5a0d950dd3b55524a12fd016dc0f5529ddd3b71786c7a561ba6c073767,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1719332603209821793,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afcefe2172ce48b51be458f8b4b4ec40,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 12806e87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79cd6519b497f35ff1e9ac8c6377ada466699c880f80fd08e64500e8964072a8,PodSandboxId:983a83971fdcd6758a676a322438c8b91d38d2bba42eee049e2f037f17b9b2e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1719332603220711629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57f573733165d81dfacbc3765903f40e,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd920691a329ba6c3778d2ce3bfd1a1d43b9b4ecd0e0ebe6a6dc63bdfbbe887d,PodSandboxId:f4a086dccd71fd3a824b232f8e9cb32d36de35cfc549217ff7057c61c47d9eed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1719332603171686785,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a56d62a295a356f75f3a9ab79148041,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 9e0468f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b5922f1-8c84-4837-863a-4c60b0389ecf name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:33:18 multinode-552402 crio[2864]: time="2024-06-25 16:33:18.279057199Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1e27eabc-4c7e-4bdd-874e-37e36f756aaf name=/runtime.v1.RuntimeService/Version
	Jun 25 16:33:18 multinode-552402 crio[2864]: time="2024-06-25 16:33:18.279144098Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e27eabc-4c7e-4bdd-874e-37e36f756aaf name=/runtime.v1.RuntimeService/Version
	Jun 25 16:33:18 multinode-552402 crio[2864]: time="2024-06-25 16:33:18.280326423Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=322fd0fc-d2cb-42eb-b9b2-78ed8faf30a9 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:33:18 multinode-552402 crio[2864]: time="2024-06-25 16:33:18.280710659Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719333198280689801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=322fd0fc-d2cb-42eb-b9b2-78ed8faf30a9 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:33:18 multinode-552402 crio[2864]: time="2024-06-25 16:33:18.281261185Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=220b5dff-ce55-4224-ac38-0213968a2fb0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:33:18 multinode-552402 crio[2864]: time="2024-06-25 16:33:18.281334609Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=220b5dff-ce55-4224-ac38-0213968a2fb0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:33:18 multinode-552402 crio[2864]: time="2024-06-25 16:33:18.281697471Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94303a695fe4d81b2707a7de43cdc991378b8299206a0e1e25904e2f455cb8ab,PodSandboxId:dd6de279d05d1dba66fe6175dab37b54fea09d279e06975e7e5cca2e3ca47324,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1719333009536563017,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-97579,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d15691ff-e95d-426b-9545-344419479d75,},Annotations:map[string]string{io.kubernetes.container.hash: f6b99b44,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40d59741ee01b451097aa7966de7d23a2e74c39a2622e3cc802154ffc4dd4c53,PodSandboxId:4d3d31c83b9c757d945ce1f380567d2cb0c636493ba55e3ae8c045f93ac76ee5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1719332975911802419,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6ctrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de38f2c-e56d-43ca-acd6-537a2c8c36c9,},Annotations:map[string]string{io.kubernetes.container.hash: 44a71256,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8647b618ee7b9b7796293cfccaaa79f29452b9ad19f19bd4bf4f5371f911f3ad,PodSandboxId:bda3a47a30d8c1b6ae2548a0a982958dcfcad03512d970765af86cff6f824b35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719332975759627375,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nphd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea3247d1-08d6-4760-8ba1-62cd6d3b7edb,},Annotations:map[string]string{io.kubernetes.container.hash: 95851791,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca97a7bd7222a87e610f58087561023d776edcd8cbb43a5a5b9c57657b895ccf,PodSandboxId:c5027149117e2f151cd3d190cc9399c7c7b8c5d3af1865417001d03e9c5b028a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719332975729150194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jf2ds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3716b4c2-3417-4d41-8143-decc38ce93aa,},Annotations:map[string]string{io.kubernetes.container.hash: 140afee7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",
\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8462e7859192761c30c0ab03423aa6ffa0af7ab3f9a1b1ac724a99b2c73716b,PodSandboxId:a978ac88feb2ac6cc9734d24177b98dd5aefd1a45e60d6bb4aca9fe8ec6fc6ca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1719332975729949551,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638c610e-b5c1-40b3-8972-fbf36c6f1bf0,},Annotations:map[string]string{io.ku
bernetes.container.hash: aa604651,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3410e4d3e33976711626b2e78ed9f2c95d4fab7ae14ffb4db21293db4b1d5d00,PodSandboxId:15e75468ce45230526d0e92a918e1a217a5b2d1f8111666256f12218b2c3f769,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719332971961179305,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2626fd7f4632883b6375eadd6d8a3d1f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bf530af755b58043cd84f310c01986cfe5f2a354d4e6102e40d465ec3a96a81,PodSandboxId:bc9f7cf553f6aa5358b4ec70c5be99fd89a1e6145d4a0076995e42adb43ea697,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719332971906857100,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a56d62a295a356f75f3a9ab79148041,},Annotations:map[string]string{io.kubernetes.container.hash: 9e04
68f2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea4bf60afb2ebac2743296dbe43222df97b74f259f9aa5564423d6b35335f325,PodSandboxId:997e31d954726ed3eba59fdd19135300af4e25306f848e18746fb071a6134919,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719332971924247628,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afcefe2172ce48b51be458f8b4b4ec40,},Annotations:map[string]string{io.kubernetes.container.hash: 12806e87,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c383a4dc6b4ce55513c013b99411811ae775392a7c5c2ecd9c50299edf98bf,PodSandboxId:993e8b320da8aad2b7faf8f09b45956526e3c9cec836c71b3f757156675ff381,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719332971887729272,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57f573733165d81dfacbc3765903f40e,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26129c27a9df69b5bad2e9ad7b5b053e3daf66ccb1a2833c454b8b33c3901d8,PodSandboxId:9cf1c28407eedb9fe47ee75a4593d7653ba0012a2854cccf4619962ab2543533,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1719332672429667487,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-97579,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d15691ff-e95d-426b-9545-344419479d75,},Annotations:map[string]string{io.kubernetes.container.hash: f6b99b44,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4f00ecc70fc073d3550f6c89dbb15c1b77b863e7713a761a495c0274be411af,PodSandboxId:45dca2bbc9e761cebbeaf38b9b0f82b6802937057683876c4cd34dcf4658440d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1719332625591518494,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638c610e-b5c1-40b3-8972-fbf36c6f1bf0,},Annotations:map[string]string{io.kubernetes.container.hash: aa604651,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e3cf9de6e7ead6b52148dcd4955b58900a7d8518f1f51123b6e1e3d75fcc3e1,PodSandboxId:3461599c9ae5b8084dc3c9eae4f23cc1ab079ad7f03de781355e8d350fd7461b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719332624731045573,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jf2ds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3716b4c2-3417-4d41-8143-decc38ce93aa,},Annotations:map[string]string{io.kubernetes.container.hash: 140afee7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dada3d77ec88472cd180091e075101888927a1b93a58d88bd7378fbe100d3045,PodSandboxId:7ca324582eef881fb3ee2a303c68dafc8088ead0efee3c38ca177db602c9a6f3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1719332623000929635,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6ctrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2de38f2c-e56d-43ca-acd6-537a2c8c36c9,},Annotations:map[string]string{io.kubernetes.container.hash: 44a71256,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74159477c5e02e00dfb27d653217dc9b2d7693cee6730c6af252cf01c5572db,PodSandboxId:948aee8fb658d4e608304b1783868152c397c5980937eb797efaa066360d130e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1719332622671325347,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nphd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea3247d1-08d6-4760-8ba1-
62cd6d3b7edb,},Annotations:map[string]string{io.kubernetes.container.hash: 95851791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56b7ee056128dc759220644aa7dc88d47b282cf6f68c6ce88244ec9bef2de09c,PodSandboxId:8c5a93cba3030028a9fda40545ca2e8a936cc10e424196a543be22574fde5ec5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1719332603272043176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2626fd7f4632883b6375eadd6d8a3d1f,},
Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a9d37ff49363320821cbe35e106f17871f1049d961ffc41b0531aeccfc735f,PodSandboxId:08a5de5a0d950dd3b55524a12fd016dc0f5529ddd3b71786c7a561ba6c073767,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1719332603209821793,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afcefe2172ce48b51be458f8b4b4ec40,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 12806e87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79cd6519b497f35ff1e9ac8c6377ada466699c880f80fd08e64500e8964072a8,PodSandboxId:983a83971fdcd6758a676a322438c8b91d38d2bba42eee049e2f037f17b9b2e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1719332603220711629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57f573733165d81dfacbc3765903f40e,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd920691a329ba6c3778d2ce3bfd1a1d43b9b4ecd0e0ebe6a6dc63bdfbbe887d,PodSandboxId:f4a086dccd71fd3a824b232f8e9cb32d36de35cfc549217ff7057c61c47d9eed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1719332603171686785,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a56d62a295a356f75f3a9ab79148041,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 9e0468f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=220b5dff-ce55-4224-ac38-0213968a2fb0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:33:18 multinode-552402 crio[2864]: time="2024-06-25 16:33:18.321566902Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6579a4db-48b1-4a62-babc-ff1b5e5799a7 name=/runtime.v1.RuntimeService/Version
	Jun 25 16:33:18 multinode-552402 crio[2864]: time="2024-06-25 16:33:18.321649744Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6579a4db-48b1-4a62-babc-ff1b5e5799a7 name=/runtime.v1.RuntimeService/Version
	Jun 25 16:33:18 multinode-552402 crio[2864]: time="2024-06-25 16:33:18.323065356Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8d3e5daf-2175-4605-9991-a9fdfbe0936f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:33:18 multinode-552402 crio[2864]: time="2024-06-25 16:33:18.323458768Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719333198323437693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d3e5daf-2175-4605-9991-a9fdfbe0936f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:33:18 multinode-552402 crio[2864]: time="2024-06-25 16:33:18.324124521Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=764c8a71-6236-4e65-b630-be156028f1a0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:33:18 multinode-552402 crio[2864]: time="2024-06-25 16:33:18.324194115Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=764c8a71-6236-4e65-b630-be156028f1a0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:33:18 multinode-552402 crio[2864]: time="2024-06-25 16:33:18.324587584Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94303a695fe4d81b2707a7de43cdc991378b8299206a0e1e25904e2f455cb8ab,PodSandboxId:dd6de279d05d1dba66fe6175dab37b54fea09d279e06975e7e5cca2e3ca47324,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1719333009536563017,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-97579,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d15691ff-e95d-426b-9545-344419479d75,},Annotations:map[string]string{io.kubernetes.container.hash: f6b99b44,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40d59741ee01b451097aa7966de7d23a2e74c39a2622e3cc802154ffc4dd4c53,PodSandboxId:4d3d31c83b9c757d945ce1f380567d2cb0c636493ba55e3ae8c045f93ac76ee5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1719332975911802419,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6ctrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de38f2c-e56d-43ca-acd6-537a2c8c36c9,},Annotations:map[string]string{io.kubernetes.container.hash: 44a71256,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8647b618ee7b9b7796293cfccaaa79f29452b9ad19f19bd4bf4f5371f911f3ad,PodSandboxId:bda3a47a30d8c1b6ae2548a0a982958dcfcad03512d970765af86cff6f824b35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719332975759627375,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nphd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea3247d1-08d6-4760-8ba1-62cd6d3b7edb,},Annotations:map[string]string{io.kubernetes.container.hash: 95851791,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca97a7bd7222a87e610f58087561023d776edcd8cbb43a5a5b9c57657b895ccf,PodSandboxId:c5027149117e2f151cd3d190cc9399c7c7b8c5d3af1865417001d03e9c5b028a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719332975729150194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jf2ds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3716b4c2-3417-4d41-8143-decc38ce93aa,},Annotations:map[string]string{io.kubernetes.container.hash: 140afee7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",
\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8462e7859192761c30c0ab03423aa6ffa0af7ab3f9a1b1ac724a99b2c73716b,PodSandboxId:a978ac88feb2ac6cc9734d24177b98dd5aefd1a45e60d6bb4aca9fe8ec6fc6ca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1719332975729949551,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638c610e-b5c1-40b3-8972-fbf36c6f1bf0,},Annotations:map[string]string{io.ku
bernetes.container.hash: aa604651,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3410e4d3e33976711626b2e78ed9f2c95d4fab7ae14ffb4db21293db4b1d5d00,PodSandboxId:15e75468ce45230526d0e92a918e1a217a5b2d1f8111666256f12218b2c3f769,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719332971961179305,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2626fd7f4632883b6375eadd6d8a3d1f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bf530af755b58043cd84f310c01986cfe5f2a354d4e6102e40d465ec3a96a81,PodSandboxId:bc9f7cf553f6aa5358b4ec70c5be99fd89a1e6145d4a0076995e42adb43ea697,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719332971906857100,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a56d62a295a356f75f3a9ab79148041,},Annotations:map[string]string{io.kubernetes.container.hash: 9e04
68f2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea4bf60afb2ebac2743296dbe43222df97b74f259f9aa5564423d6b35335f325,PodSandboxId:997e31d954726ed3eba59fdd19135300af4e25306f848e18746fb071a6134919,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719332971924247628,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afcefe2172ce48b51be458f8b4b4ec40,},Annotations:map[string]string{io.kubernetes.container.hash: 12806e87,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c383a4dc6b4ce55513c013b99411811ae775392a7c5c2ecd9c50299edf98bf,PodSandboxId:993e8b320da8aad2b7faf8f09b45956526e3c9cec836c71b3f757156675ff381,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719332971887729272,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57f573733165d81dfacbc3765903f40e,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26129c27a9df69b5bad2e9ad7b5b053e3daf66ccb1a2833c454b8b33c3901d8,PodSandboxId:9cf1c28407eedb9fe47ee75a4593d7653ba0012a2854cccf4619962ab2543533,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1719332672429667487,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-97579,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d15691ff-e95d-426b-9545-344419479d75,},Annotations:map[string]string{io.kubernetes.container.hash: f6b99b44,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4f00ecc70fc073d3550f6c89dbb15c1b77b863e7713a761a495c0274be411af,PodSandboxId:45dca2bbc9e761cebbeaf38b9b0f82b6802937057683876c4cd34dcf4658440d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1719332625591518494,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638c610e-b5c1-40b3-8972-fbf36c6f1bf0,},Annotations:map[string]string{io.kubernetes.container.hash: aa604651,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e3cf9de6e7ead6b52148dcd4955b58900a7d8518f1f51123b6e1e3d75fcc3e1,PodSandboxId:3461599c9ae5b8084dc3c9eae4f23cc1ab079ad7f03de781355e8d350fd7461b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719332624731045573,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jf2ds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3716b4c2-3417-4d41-8143-decc38ce93aa,},Annotations:map[string]string{io.kubernetes.container.hash: 140afee7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dada3d77ec88472cd180091e075101888927a1b93a58d88bd7378fbe100d3045,PodSandboxId:7ca324582eef881fb3ee2a303c68dafc8088ead0efee3c38ca177db602c9a6f3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1719332623000929635,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6ctrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2de38f2c-e56d-43ca-acd6-537a2c8c36c9,},Annotations:map[string]string{io.kubernetes.container.hash: 44a71256,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74159477c5e02e00dfb27d653217dc9b2d7693cee6730c6af252cf01c5572db,PodSandboxId:948aee8fb658d4e608304b1783868152c397c5980937eb797efaa066360d130e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1719332622671325347,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nphd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea3247d1-08d6-4760-8ba1-
62cd6d3b7edb,},Annotations:map[string]string{io.kubernetes.container.hash: 95851791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56b7ee056128dc759220644aa7dc88d47b282cf6f68c6ce88244ec9bef2de09c,PodSandboxId:8c5a93cba3030028a9fda40545ca2e8a936cc10e424196a543be22574fde5ec5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1719332603272043176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2626fd7f4632883b6375eadd6d8a3d1f,},
Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a9d37ff49363320821cbe35e106f17871f1049d961ffc41b0531aeccfc735f,PodSandboxId:08a5de5a0d950dd3b55524a12fd016dc0f5529ddd3b71786c7a561ba6c073767,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1719332603209821793,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afcefe2172ce48b51be458f8b4b4ec40,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 12806e87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79cd6519b497f35ff1e9ac8c6377ada466699c880f80fd08e64500e8964072a8,PodSandboxId:983a83971fdcd6758a676a322438c8b91d38d2bba42eee049e2f037f17b9b2e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1719332603220711629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57f573733165d81dfacbc3765903f40e,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd920691a329ba6c3778d2ce3bfd1a1d43b9b4ecd0e0ebe6a6dc63bdfbbe887d,PodSandboxId:f4a086dccd71fd3a824b232f8e9cb32d36de35cfc549217ff7057c61c47d9eed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1719332603171686785,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-552402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a56d62a295a356f75f3a9ab79148041,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 9e0468f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=764c8a71-6236-4e65-b630-be156028f1a0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	94303a695fe4d       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   dd6de279d05d1       busybox-fc5497c4f-97579
	40d59741ee01b       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      3 minutes ago       Running             kindnet-cni               1                   4d3d31c83b9c7       kindnet-6ctrk
	8647b618ee7b9       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      3 minutes ago       Running             kube-proxy                1                   bda3a47a30d8c       kube-proxy-nphd7
	a8462e7859192       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   a978ac88feb2a       storage-provisioner
	ca97a7bd7222a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   c5027149117e2       coredns-7db6d8ff4d-jf2ds
	3410e4d3e3397       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      3 minutes ago       Running             kube-scheduler            1                   15e75468ce452       kube-scheduler-multinode-552402
	ea4bf60afb2eb       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   997e31d954726       etcd-multinode-552402
	9bf530af755b5       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      3 minutes ago       Running             kube-apiserver            1                   bc9f7cf553f6a       kube-apiserver-multinode-552402
	90c383a4dc6b4       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      3 minutes ago       Running             kube-controller-manager   1                   993e8b320da8a       kube-controller-manager-multinode-552402
	a26129c27a9df       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   9cf1c28407eed       busybox-fc5497c4f-97579
	d4f00ecc70fc0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   45dca2bbc9e76       storage-provisioner
	9e3cf9de6e7ea       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   3461599c9ae5b       coredns-7db6d8ff4d-jf2ds
	dada3d77ec884       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      9 minutes ago       Exited              kindnet-cni               0                   7ca324582eef8       kindnet-6ctrk
	f74159477c5e0       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      9 minutes ago       Exited              kube-proxy                0                   948aee8fb658d       kube-proxy-nphd7
	56b7ee056128d       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      9 minutes ago       Exited              kube-scheduler            0                   8c5a93cba3030       kube-scheduler-multinode-552402
	79cd6519b497f       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      9 minutes ago       Exited              kube-controller-manager   0                   983a83971fdcd       kube-controller-manager-multinode-552402
	74a9d37ff4936       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      9 minutes ago       Exited              etcd                      0                   08a5de5a0d950       etcd-multinode-552402
	bd920691a329b       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      9 minutes ago       Exited              kube-apiserver            0                   f4a086dccd71f       kube-apiserver-multinode-552402
	
	
	==> coredns [9e3cf9de6e7ead6b52148dcd4955b58900a7d8518f1f51123b6e1e3d75fcc3e1] <==
	[INFO] 10.244.0.3:41558 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001760917s
	[INFO] 10.244.0.3:36998 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000085187s
	[INFO] 10.244.0.3:38483 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006269s
	[INFO] 10.244.0.3:59326 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001054349s
	[INFO] 10.244.0.3:35112 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075641s
	[INFO] 10.244.0.3:56628 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00004065s
	[INFO] 10.244.0.3:54400 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090516s
	[INFO] 10.244.1.2:58618 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169496s
	[INFO] 10.244.1.2:40742 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148609s
	[INFO] 10.244.1.2:44795 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085688s
	[INFO] 10.244.1.2:58327 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176335s
	[INFO] 10.244.0.3:57422 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000095233s
	[INFO] 10.244.0.3:51491 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000435s
	[INFO] 10.244.0.3:57623 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000032653s
	[INFO] 10.244.0.3:37188 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031269s
	[INFO] 10.244.1.2:36422 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117611s
	[INFO] 10.244.1.2:36651 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000134098s
	[INFO] 10.244.1.2:58096 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000105309s
	[INFO] 10.244.1.2:42834 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000090153s
	[INFO] 10.244.0.3:51313 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116702s
	[INFO] 10.244.0.3:60500 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000127251s
	[INFO] 10.244.0.3:35244 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074486s
	[INFO] 10.244.0.3:56809 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000073025s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ca97a7bd7222a87e610f58087561023d776edcd8cbb43a5a5b9c57657b895ccf] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36762 - 20673 "HINFO IN 3927945517368221176.1344637245483628756. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026407803s
	
	
	==> describe nodes <==
	Name:               multinode-552402
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-552402
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b
	                    minikube.k8s.io/name=multinode-552402
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_25T16_23_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 25 Jun 2024 16:23:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-552402
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 25 Jun 2024 16:33:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 25 Jun 2024 16:29:35 +0000   Tue, 25 Jun 2024 16:23:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 25 Jun 2024 16:29:35 +0000   Tue, 25 Jun 2024 16:23:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 25 Jun 2024 16:29:35 +0000   Tue, 25 Jun 2024 16:23:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 25 Jun 2024 16:29:35 +0000   Tue, 25 Jun 2024 16:23:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.231
	  Hostname:    multinode-552402
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6025096499d94c939deba1c860e7c4b7
	  System UUID:                60250964-99d9-4c93-9deb-a1c860e7c4b7
	  Boot ID:                    108b3034-f86c-45ec-b474-7e364c281e50
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-97579                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m49s
	  kube-system                 coredns-7db6d8ff4d-jf2ds                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m36s
	  kube-system                 etcd-multinode-552402                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m50s
	  kube-system                 kindnet-6ctrk                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m37s
	  kube-system                 kube-apiserver-multinode-552402             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 kube-controller-manager-multinode-552402    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 kube-proxy-nphd7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	  kube-system                 kube-scheduler-multinode-552402             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m35s                  kube-proxy       
	  Normal  Starting                 3m42s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m56s (x8 over 9m56s)  kubelet          Node multinode-552402 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m56s (x8 over 9m56s)  kubelet          Node multinode-552402 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m56s (x7 over 9m56s)  kubelet          Node multinode-552402 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m50s                  kubelet          Node multinode-552402 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  9m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    9m50s                  kubelet          Node multinode-552402 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m50s                  kubelet          Node multinode-552402 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m50s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m37s                  node-controller  Node multinode-552402 event: Registered Node multinode-552402 in Controller
	  Normal  NodeReady                9m34s                  kubelet          Node multinode-552402 status is now: NodeReady
	  Normal  Starting                 3m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m47s (x8 over 3m47s)  kubelet          Node multinode-552402 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m47s (x8 over 3m47s)  kubelet          Node multinode-552402 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m47s (x7 over 3m47s)  kubelet          Node multinode-552402 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m31s                  node-controller  Node multinode-552402 event: Registered Node multinode-552402 in Controller
	
	
	Name:               multinode-552402-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-552402-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b
	                    minikube.k8s.io/name=multinode-552402
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_25T16_30_14_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 25 Jun 2024 16:30:14 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-552402-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 25 Jun 2024 16:30:55 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 25 Jun 2024 16:30:44 +0000   Tue, 25 Jun 2024 16:31:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 25 Jun 2024 16:30:44 +0000   Tue, 25 Jun 2024 16:31:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 25 Jun 2024 16:30:44 +0000   Tue, 25 Jun 2024 16:31:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 25 Jun 2024 16:30:44 +0000   Tue, 25 Jun 2024 16:31:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.166
	  Hostname:    multinode-552402-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8493e28109fb4b659052d6c402f92bc8
	  System UUID:                8493e281-09fb-4b65-9052-d6c402f92bc8
	  Boot ID:                    4fc2f8d8-df56-4daa-875b-0a9e67c6fe47
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vdl68    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  kube-system                 kindnet-djmlv              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m1s
	  kube-system                 kube-proxy-774kb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m                   kube-proxy       
	  Normal  Starting                 8m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  9m1s (x2 over 9m1s)  kubelet          Node multinode-552402-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m1s (x2 over 9m1s)  kubelet          Node multinode-552402-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m1s (x2 over 9m1s)  kubelet          Node multinode-552402-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m52s                kubelet          Node multinode-552402-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m4s (x2 over 3m4s)  kubelet          Node multinode-552402-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m4s (x2 over 3m4s)  kubelet          Node multinode-552402-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m4s (x2 over 3m4s)  kubelet          Node multinode-552402-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m55s                kubelet          Node multinode-552402-m02 status is now: NodeReady
	  Normal  NodeNotReady             101s                 node-controller  Node multinode-552402-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[ +10.618814] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.056074] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075403] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.192282] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.120152] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.267734] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.040799] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +3.984264] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.061794] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.990240] systemd-fstab-generator[1277]: Ignoring "noauto" option for root device
	[  +0.074452] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.414532] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.349588] systemd-fstab-generator[1567]: Ignoring "noauto" option for root device
	[Jun25 16:24] kauditd_printk_skb: 84 callbacks suppressed
	[Jun25 16:29] systemd-fstab-generator[2778]: Ignoring "noauto" option for root device
	[  +0.152235] systemd-fstab-generator[2790]: Ignoring "noauto" option for root device
	[  +0.159576] systemd-fstab-generator[2804]: Ignoring "noauto" option for root device
	[  +0.144894] systemd-fstab-generator[2816]: Ignoring "noauto" option for root device
	[  +0.273293] systemd-fstab-generator[2844]: Ignoring "noauto" option for root device
	[  +2.513212] systemd-fstab-generator[2948]: Ignoring "noauto" option for root device
	[  +2.017196] systemd-fstab-generator[3072]: Ignoring "noauto" option for root device
	[  +0.078915] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.019523] kauditd_printk_skb: 87 callbacks suppressed
	[ +13.511272] systemd-fstab-generator[3883]: Ignoring "noauto" option for root device
	[Jun25 16:30] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [74a9d37ff49363320821cbe35e106f17871f1049d961ffc41b0531aeccfc735f] <==
	{"level":"info","ts":"2024-06-25T16:24:17.616265Z","caller":"traceutil/trace.go:171","msg":"trace[912443765] range","detail":"{range_begin:/registry/minions/multinode-552402-m02; range_end:; response_count:0; response_revision:487; }","duration":"247.931513ms","start":"2024-06-25T16:24:17.368326Z","end":"2024-06-25T16:24:17.616258Z","steps":["trace[912443765] 'agreement among raft nodes before linearized reading'  (duration: 247.790258ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-25T16:24:17.618213Z","caller":"traceutil/trace.go:171","msg":"trace[2080603250] transaction","detail":"{read_only:false; response_revision:488; number_of_response:1; }","duration":"199.44407ms","start":"2024-06-25T16:24:17.418759Z","end":"2024-06-25T16:24:17.618203Z","steps":["trace[2080603250] 'process raft request'  (duration: 199.270329ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-25T16:25:00.434392Z","caller":"traceutil/trace.go:171","msg":"trace[1383212026] transaction","detail":"{read_only:false; response_revision:613; number_of_response:1; }","duration":"240.395203ms","start":"2024-06-25T16:25:00.193944Z","end":"2024-06-25T16:25:00.434339Z","steps":["trace[1383212026] 'process raft request'  (duration: 239.171465ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-25T16:25:00.434633Z","caller":"traceutil/trace.go:171","msg":"trace[1746066410] linearizableReadLoop","detail":"{readStateIndex:645; appliedIndex:643; }","duration":"143.774553ms","start":"2024-06-25T16:25:00.290832Z","end":"2024-06-25T16:25:00.434607Z","steps":["trace[1746066410] 'read index received'  (duration: 142.291698ms)","trace[1746066410] 'applied index is now lower than readState.Index'  (duration: 1.482442ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-25T16:25:00.434845Z","caller":"traceutil/trace.go:171","msg":"trace[1541619421] transaction","detail":"{read_only:false; response_revision:614; number_of_response:1; }","duration":"173.795813ms","start":"2024-06-25T16:25:00.26104Z","end":"2024-06-25T16:25:00.434836Z","steps":["trace[1541619421] 'process raft request'  (duration: 173.513905ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-25T16:25:00.435134Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.259219ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-25T16:25:00.435196Z","caller":"traceutil/trace.go:171","msg":"trace[1513403294] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:614; }","duration":"144.374051ms","start":"2024-06-25T16:25:00.290811Z","end":"2024-06-25T16:25:00.435185Z","steps":["trace[1513403294] 'agreement among raft nodes before linearized reading'  (duration: 144.240839ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-25T16:25:00.435133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.282524ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-552402-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-06-25T16:25:00.435341Z","caller":"traceutil/trace.go:171","msg":"trace[1022535190] range","detail":"{range_begin:/registry/minions/multinode-552402-m03; range_end:; response_count:1; response_revision:614; }","duration":"110.527043ms","start":"2024-06-25T16:25:00.324804Z","end":"2024-06-25T16:25:00.435332Z","steps":["trace[1022535190] 'agreement among raft nodes before linearized reading'  (duration: 110.260786ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-25T16:25:10.914592Z","caller":"traceutil/trace.go:171","msg":"trace[1545405513] transaction","detail":"{read_only:false; response_revision:666; number_of_response:1; }","duration":"112.790632ms","start":"2024-06-25T16:25:10.801785Z","end":"2024-06-25T16:25:10.914576Z","steps":["trace[1545405513] 'process raft request'  (duration: 112.705959ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-25T16:25:11.098219Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.430779ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3062606420781749424 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-552402\" mod_revision:635 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-552402\" value_size:496 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-552402\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-25T16:25:11.098353Z","caller":"traceutil/trace.go:171","msg":"trace[1466286931] linearizableReadLoop","detail":"{readStateIndex:704; appliedIndex:703; }","duration":"223.490887ms","start":"2024-06-25T16:25:10.874852Z","end":"2024-06-25T16:25:11.098343Z","steps":["trace[1466286931] 'read index received'  (duration: 40.200697ms)","trace[1466286931] 'applied index is now lower than readState.Index'  (duration: 183.289084ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-25T16:25:11.098401Z","caller":"traceutil/trace.go:171","msg":"trace[1623922951] transaction","detail":"{read_only:false; response_revision:667; number_of_response:1; }","duration":"291.279186ms","start":"2024-06-25T16:25:10.807106Z","end":"2024-06-25T16:25:11.098385Z","steps":["trace[1623922951] 'process raft request'  (duration: 161.282799ms)","trace[1623922951] 'compare'  (duration: 129.370345ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-25T16:25:11.09852Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.661211ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-552402-m02\" ","response":"range_response_count:1 size:3935"}
	{"level":"info","ts":"2024-06-25T16:25:11.098559Z","caller":"traceutil/trace.go:171","msg":"trace[813164059] range","detail":"{range_begin:/registry/minions/multinode-552402-m02; range_end:; response_count:1; response_revision:667; }","duration":"223.723279ms","start":"2024-06-25T16:25:10.874829Z","end":"2024-06-25T16:25:11.098552Z","steps":["trace[813164059] 'agreement among raft nodes before linearized reading'  (duration: 223.56643ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-25T16:27:54.400673Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-06-25T16:27:54.400797Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-552402","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.231:2380"],"advertise-client-urls":["https://192.168.39.231:2379"]}
	{"level":"warn","ts":"2024-06-25T16:27:54.400887Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.231:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-25T16:27:54.400923Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.231:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-25T16:27:54.40106Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-25T16:27:54.401124Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-25T16:27:54.435399Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6a82bbfd8eee2a80","current-leader-member-id":"6a82bbfd8eee2a80"}
	{"level":"info","ts":"2024-06-25T16:27:54.44147Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.231:2380"}
	{"level":"info","ts":"2024-06-25T16:27:54.44158Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.231:2380"}
	{"level":"info","ts":"2024-06-25T16:27:54.441592Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-552402","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.231:2380"],"advertise-client-urls":["https://192.168.39.231:2379"]}
	
	
	==> etcd [ea4bf60afb2ebac2743296dbe43222df97b74f259f9aa5564423d6b35335f325] <==
	{"level":"info","ts":"2024-06-25T16:29:32.383313Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-25T16:29:32.383365Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-25T16:29:32.383381Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-25T16:29:32.383724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a82bbfd8eee2a80 switched to configuration voters=(7674903412691839616)"}
	{"level":"info","ts":"2024-06-25T16:29:32.383798Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1a20717615099fdd","local-member-id":"6a82bbfd8eee2a80","added-peer-id":"6a82bbfd8eee2a80","added-peer-peer-urls":["https://192.168.39.231:2380"]}
	{"level":"info","ts":"2024-06-25T16:29:32.384019Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1a20717615099fdd","local-member-id":"6a82bbfd8eee2a80","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-25T16:29:32.384061Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-25T16:29:32.390025Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.231:2380"}
	{"level":"info","ts":"2024-06-25T16:29:32.39006Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.231:2380"}
	{"level":"info","ts":"2024-06-25T16:29:32.390278Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6a82bbfd8eee2a80","initial-advertise-peer-urls":["https://192.168.39.231:2380"],"listen-peer-urls":["https://192.168.39.231:2380"],"advertise-client-urls":["https://192.168.39.231:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.231:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-25T16:29:32.390326Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-25T16:29:33.724862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a82bbfd8eee2a80 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-25T16:29:33.724922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a82bbfd8eee2a80 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-25T16:29:33.725023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a82bbfd8eee2a80 received MsgPreVoteResp from 6a82bbfd8eee2a80 at term 2"}
	{"level":"info","ts":"2024-06-25T16:29:33.725042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a82bbfd8eee2a80 became candidate at term 3"}
	{"level":"info","ts":"2024-06-25T16:29:33.725047Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a82bbfd8eee2a80 received MsgVoteResp from 6a82bbfd8eee2a80 at term 3"}
	{"level":"info","ts":"2024-06-25T16:29:33.725056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a82bbfd8eee2a80 became leader at term 3"}
	{"level":"info","ts":"2024-06-25T16:29:33.725067Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6a82bbfd8eee2a80 elected leader 6a82bbfd8eee2a80 at term 3"}
	{"level":"info","ts":"2024-06-25T16:29:33.729471Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"6a82bbfd8eee2a80","local-member-attributes":"{Name:multinode-552402 ClientURLs:[https://192.168.39.231:2379]}","request-path":"/0/members/6a82bbfd8eee2a80/attributes","cluster-id":"1a20717615099fdd","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-25T16:29:33.72964Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-25T16:29:33.729659Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-25T16:29:33.729874Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-25T16:29:33.729905Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-25T16:29:33.731862Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.231:2379"}
	{"level":"info","ts":"2024-06-25T16:29:33.733489Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 16:33:18 up 10 min,  0 users,  load average: 0.48, 0.36, 0.17
	Linux multinode-552402 5.10.207 #1 SMP Mon Jun 24 21:03:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [40d59741ee01b451097aa7966de7d23a2e74c39a2622e3cc802154ffc4dd4c53] <==
	I0625 16:32:16.799915       1 main.go:250] Node multinode-552402-m02 has CIDR [10.244.1.0/24] 
	I0625 16:32:26.813294       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0625 16:32:26.813333       1 main.go:227] handling current node
	I0625 16:32:26.813343       1 main.go:223] Handling node with IPs: map[192.168.39.166:{}]
	I0625 16:32:26.813348       1 main.go:250] Node multinode-552402-m02 has CIDR [10.244.1.0/24] 
	I0625 16:32:36.836289       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0625 16:32:36.836513       1 main.go:227] handling current node
	I0625 16:32:36.836586       1 main.go:223] Handling node with IPs: map[192.168.39.166:{}]
	I0625 16:32:36.836629       1 main.go:250] Node multinode-552402-m02 has CIDR [10.244.1.0/24] 
	I0625 16:32:46.847108       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0625 16:32:46.847259       1 main.go:227] handling current node
	I0625 16:32:46.847290       1 main.go:223] Handling node with IPs: map[192.168.39.166:{}]
	I0625 16:32:46.847309       1 main.go:250] Node multinode-552402-m02 has CIDR [10.244.1.0/24] 
	I0625 16:32:56.860289       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0625 16:32:56.860328       1 main.go:227] handling current node
	I0625 16:32:56.860347       1 main.go:223] Handling node with IPs: map[192.168.39.166:{}]
	I0625 16:32:56.860352       1 main.go:250] Node multinode-552402-m02 has CIDR [10.244.1.0/24] 
	I0625 16:33:06.872023       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0625 16:33:06.872104       1 main.go:227] handling current node
	I0625 16:33:06.872128       1 main.go:223] Handling node with IPs: map[192.168.39.166:{}]
	I0625 16:33:06.872145       1 main.go:250] Node multinode-552402-m02 has CIDR [10.244.1.0/24] 
	I0625 16:33:16.880511       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0625 16:33:16.880589       1 main.go:227] handling current node
	I0625 16:33:16.880613       1 main.go:223] Handling node with IPs: map[192.168.39.166:{}]
	I0625 16:33:16.880630       1 main.go:250] Node multinode-552402-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [dada3d77ec88472cd180091e075101888927a1b93a58d88bd7378fbe100d3045] <==
	I0625 16:27:13.955886       1 main.go:250] Node multinode-552402-m03 has CIDR [10.244.3.0/24] 
	I0625 16:27:23.963476       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0625 16:27:23.963564       1 main.go:227] handling current node
	I0625 16:27:23.963588       1 main.go:223] Handling node with IPs: map[192.168.39.166:{}]
	I0625 16:27:23.963604       1 main.go:250] Node multinode-552402-m02 has CIDR [10.244.1.0/24] 
	I0625 16:27:23.963720       1 main.go:223] Handling node with IPs: map[192.168.39.177:{}]
	I0625 16:27:23.963744       1 main.go:250] Node multinode-552402-m03 has CIDR [10.244.3.0/24] 
	I0625 16:27:33.976024       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0625 16:27:33.976214       1 main.go:227] handling current node
	I0625 16:27:33.976260       1 main.go:223] Handling node with IPs: map[192.168.39.166:{}]
	I0625 16:27:33.976279       1 main.go:250] Node multinode-552402-m02 has CIDR [10.244.1.0/24] 
	I0625 16:27:33.976402       1 main.go:223] Handling node with IPs: map[192.168.39.177:{}]
	I0625 16:27:33.976457       1 main.go:250] Node multinode-552402-m03 has CIDR [10.244.3.0/24] 
	I0625 16:27:43.988705       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0625 16:27:43.988807       1 main.go:227] handling current node
	I0625 16:27:43.988836       1 main.go:223] Handling node with IPs: map[192.168.39.166:{}]
	I0625 16:27:43.988853       1 main.go:250] Node multinode-552402-m02 has CIDR [10.244.1.0/24] 
	I0625 16:27:43.989038       1 main.go:223] Handling node with IPs: map[192.168.39.177:{}]
	I0625 16:27:43.989084       1 main.go:250] Node multinode-552402-m03 has CIDR [10.244.3.0/24] 
	I0625 16:27:54.002507       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0625 16:27:54.002543       1 main.go:227] handling current node
	I0625 16:27:54.002559       1 main.go:223] Handling node with IPs: map[192.168.39.166:{}]
	I0625 16:27:54.002564       1 main.go:250] Node multinode-552402-m02 has CIDR [10.244.1.0/24] 
	I0625 16:27:54.002656       1 main.go:223] Handling node with IPs: map[192.168.39.177:{}]
	I0625 16:27:54.002661       1 main.go:250] Node multinode-552402-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [9bf530af755b58043cd84f310c01986cfe5f2a354d4e6102e40d465ec3a96a81] <==
	I0625 16:29:35.018767       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0625 16:29:35.020931       1 aggregator.go:165] initial CRD sync complete...
	I0625 16:29:35.021017       1 autoregister_controller.go:141] Starting autoregister controller
	I0625 16:29:35.021029       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0625 16:29:35.021035       1 cache.go:39] Caches are synced for autoregister controller
	I0625 16:29:35.061038       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0625 16:29:35.061073       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0625 16:29:35.061407       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0625 16:29:35.065081       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0625 16:29:35.065132       1 policy_source.go:224] refreshing policies
	I0625 16:29:35.065829       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0625 16:29:35.073580       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0625 16:29:35.074021       1 shared_informer.go:320] Caches are synced for configmaps
	I0625 16:29:35.089457       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0625 16:29:35.121940       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0625 16:29:35.146950       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0625 16:29:35.164894       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0625 16:29:35.975895       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0625 16:29:36.774791       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0625 16:29:36.893564       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0625 16:29:36.907475       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0625 16:29:36.964669       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0625 16:29:36.970780       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0625 16:29:47.902120       1 controller.go:615] quota admission added evaluator for: endpoints
	I0625 16:29:47.955810       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [bd920691a329ba6c3778d2ce3bfd1a1d43b9b4ecd0e0ebe6a6dc63bdfbbe887d] <==
	W0625 16:27:54.430415       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.430488       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.430519       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.430551       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.430576       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.430627       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.430652       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431180       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431481       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431523       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431547       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431575       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431587       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431604       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431629       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431655       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431657       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431686       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431701       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431714       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431729       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431746       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431758       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431781       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:27:54.431794       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [79cd6519b497f35ff1e9ac8c6377ada466699c880f80fd08e64500e8964072a8] <==
	I0625 16:23:46.480070       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0625 16:24:17.626020       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-552402-m02\" does not exist"
	I0625 16:24:17.653783       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-552402-m02" podCIDRs=["10.244.1.0/24"]
	I0625 16:24:21.486130       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-552402-m02"
	I0625 16:24:26.877681       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-552402-m02"
	I0625 16:24:29.419762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.05429ms"
	I0625 16:24:29.428350       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.26433ms"
	I0625 16:24:29.428627       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.505µs"
	I0625 16:24:32.663124       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.68071ms"
	I0625 16:24:32.663213       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.295µs"
	I0625 16:24:33.428241       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.19328ms"
	I0625 16:24:33.428337       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.288µs"
	I0625 16:25:00.439645       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-552402-m03\" does not exist"
	I0625 16:25:00.439867       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-552402-m02"
	I0625 16:25:00.449284       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-552402-m03" podCIDRs=["10.244.2.0/24"]
	I0625 16:25:01.505149       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-552402-m03"
	I0625 16:25:10.262164       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-552402-m02"
	I0625 16:25:39.022904       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-552402-m02"
	I0625 16:25:39.958891       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-552402-m03\" does not exist"
	I0625 16:25:39.959451       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-552402-m02"
	I0625 16:25:39.970211       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-552402-m03" podCIDRs=["10.244.3.0/24"]
	I0625 16:25:48.885683       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-552402-m02"
	I0625 16:26:31.555902       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-552402-m02"
	I0625 16:26:31.621517       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.339401ms"
	I0625 16:26:31.622860       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.066µs"
	
	
	==> kube-controller-manager [90c383a4dc6b4ce55513c013b99411811ae775392a7c5c2ecd9c50299edf98bf] <==
	I0625 16:30:14.180866       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-552402-m02" podCIDRs=["10.244.1.0/24"]
	I0625 16:30:16.055779       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.071µs"
	I0625 16:30:16.092618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.188µs"
	I0625 16:30:16.104077       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.551µs"
	I0625 16:30:16.112620       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.199µs"
	I0625 16:30:16.120390       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.087µs"
	I0625 16:30:16.123637       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.565µs"
	I0625 16:30:18.042106       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.698µs"
	I0625 16:30:23.123509       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-552402-m02"
	I0625 16:30:23.139670       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.444µs"
	I0625 16:30:23.156326       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.464µs"
	I0625 16:30:26.382716       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.080468ms"
	I0625 16:30:26.383688       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.892µs"
	I0625 16:30:41.689541       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-552402-m02"
	I0625 16:30:43.155783       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-552402-m02"
	I0625 16:30:43.156011       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-552402-m03\" does not exist"
	I0625 16:30:43.168867       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-552402-m03" podCIDRs=["10.244.2.0/24"]
	I0625 16:30:52.002473       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-552402-m02"
	I0625 16:30:57.222234       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-552402-m02"
	I0625 16:31:37.818165       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.790333ms"
	I0625 16:31:37.818940       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.937µs"
	I0625 16:31:47.726760       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-h2txx"
	I0625 16:31:47.779806       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-h2txx"
	I0625 16:31:47.779852       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-pr9ph"
	I0625 16:31:47.821151       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-pr9ph"
	
	
	==> kube-proxy [8647b618ee7b9b7796293cfccaaa79f29452b9ad19f19bd4bf4f5371f911f3ad] <==
	I0625 16:29:36.066785       1 server_linux.go:69] "Using iptables proxy"
	I0625 16:29:36.078098       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.231"]
	I0625 16:29:36.129029       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0625 16:29:36.129083       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0625 16:29:36.129099       1 server_linux.go:165] "Using iptables Proxier"
	I0625 16:29:36.133564       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0625 16:29:36.133837       1 server.go:872] "Version info" version="v1.30.2"
	I0625 16:29:36.133865       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0625 16:29:36.136099       1 config.go:192] "Starting service config controller"
	I0625 16:29:36.136388       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0625 16:29:36.137069       1 config.go:101] "Starting endpoint slice config controller"
	I0625 16:29:36.137183       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0625 16:29:36.139928       1 config.go:319] "Starting node config controller"
	I0625 16:29:36.140058       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0625 16:29:36.237864       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0625 16:29:36.238069       1 shared_informer.go:320] Caches are synced for service config
	I0625 16:29:36.241789       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f74159477c5e02e00dfb27d653217dc9b2d7693cee6730c6af252cf01c5572db] <==
	I0625 16:23:43.026302       1 server_linux.go:69] "Using iptables proxy"
	I0625 16:23:43.038183       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.231"]
	I0625 16:23:43.134484       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0625 16:23:43.134549       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0625 16:23:43.134570       1 server_linux.go:165] "Using iptables Proxier"
	I0625 16:23:43.145931       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0625 16:23:43.146206       1 server.go:872] "Version info" version="v1.30.2"
	I0625 16:23:43.146234       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0625 16:23:43.148296       1 config.go:192] "Starting service config controller"
	I0625 16:23:43.148328       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0625 16:23:43.148381       1 config.go:101] "Starting endpoint slice config controller"
	I0625 16:23:43.148386       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0625 16:23:43.149027       1 config.go:319] "Starting node config controller"
	I0625 16:23:43.149053       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0625 16:23:43.248424       1 shared_informer.go:320] Caches are synced for service config
	I0625 16:23:43.248456       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0625 16:23:43.249102       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3410e4d3e33976711626b2e78ed9f2c95d4fab7ae14ffb4db21293db4b1d5d00] <==
	W0625 16:29:35.055856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0625 16:29:35.055923       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0625 16:29:35.056052       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0625 16:29:35.056158       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0625 16:29:35.056273       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0625 16:29:35.056359       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0625 16:29:35.056492       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0625 16:29:35.056590       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0625 16:29:35.056724       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0625 16:29:35.056812       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0625 16:29:35.056926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0625 16:29:35.057129       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0625 16:29:35.057159       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0625 16:29:35.057233       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0625 16:29:35.059194       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0625 16:29:35.059289       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0625 16:29:35.059416       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0625 16:29:35.059513       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0625 16:29:35.059639       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0625 16:29:35.059737       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0625 16:29:35.059861       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0625 16:29:35.059929       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0625 16:29:35.060144       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0625 16:29:35.060231       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0625 16:29:36.542191       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [56b7ee056128dc759220644aa7dc88d47b282cf6f68c6ce88244ec9bef2de09c] <==
	E0625 16:23:25.808420       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0625 16:23:25.807505       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0625 16:23:25.808466       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0625 16:23:25.807551       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0625 16:23:25.808512       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0625 16:23:26.622021       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0625 16:23:26.622125       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0625 16:23:26.652129       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0625 16:23:26.652254       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0625 16:23:26.664330       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0625 16:23:26.664430       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0625 16:23:26.725657       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0625 16:23:26.725776       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0625 16:23:26.768567       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0625 16:23:26.768608       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0625 16:23:26.831237       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0625 16:23:26.831283       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0625 16:23:26.869782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0625 16:23:26.869914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0625 16:23:26.925126       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0625 16:23:26.925204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0625 16:23:26.986353       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0625 16:23:26.986728       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0625 16:23:28.703349       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0625 16:27:54.398772       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jun 25 16:29:35 multinode-552402 kubelet[3079]: I0625 16:29:35.169214    3079 topology_manager.go:215] "Topology Admit Handler" podUID="638c610e-b5c1-40b3-8972-fbf36c6f1bf0" podNamespace="kube-system" podName="storage-provisioner"
	Jun 25 16:29:35 multinode-552402 kubelet[3079]: I0625 16:29:35.169314    3079 topology_manager.go:215] "Topology Admit Handler" podUID="d15691ff-e95d-426b-9545-344419479d75" podNamespace="default" podName="busybox-fc5497c4f-97579"
	Jun 25 16:29:35 multinode-552402 kubelet[3079]: I0625 16:29:35.169945    3079 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 25 16:29:35 multinode-552402 kubelet[3079]: I0625 16:29:35.208947    3079 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2de38f2c-e56d-43ca-acd6-537a2c8c36c9-lib-modules\") pod \"kindnet-6ctrk\" (UID: \"2de38f2c-e56d-43ca-acd6-537a2c8c36c9\") " pod="kube-system/kindnet-6ctrk"
	Jun 25 16:29:35 multinode-552402 kubelet[3079]: I0625 16:29:35.209120    3079 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2de38f2c-e56d-43ca-acd6-537a2c8c36c9-xtables-lock\") pod \"kindnet-6ctrk\" (UID: \"2de38f2c-e56d-43ca-acd6-537a2c8c36c9\") " pod="kube-system/kindnet-6ctrk"
	Jun 25 16:29:35 multinode-552402 kubelet[3079]: I0625 16:29:35.209156    3079 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea3247d1-08d6-4760-8ba1-62cd6d3b7edb-lib-modules\") pod \"kube-proxy-nphd7\" (UID: \"ea3247d1-08d6-4760-8ba1-62cd6d3b7edb\") " pod="kube-system/kube-proxy-nphd7"
	Jun 25 16:29:35 multinode-552402 kubelet[3079]: I0625 16:29:35.209217    3079 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/638c610e-b5c1-40b3-8972-fbf36c6f1bf0-tmp\") pod \"storage-provisioner\" (UID: \"638c610e-b5c1-40b3-8972-fbf36c6f1bf0\") " pod="kube-system/storage-provisioner"
	Jun 25 16:29:35 multinode-552402 kubelet[3079]: I0625 16:29:35.209302    3079 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea3247d1-08d6-4760-8ba1-62cd6d3b7edb-xtables-lock\") pod \"kube-proxy-nphd7\" (UID: \"ea3247d1-08d6-4760-8ba1-62cd6d3b7edb\") " pod="kube-system/kube-proxy-nphd7"
	Jun 25 16:29:35 multinode-552402 kubelet[3079]: I0625 16:29:35.209338    3079 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2de38f2c-e56d-43ca-acd6-537a2c8c36c9-cni-cfg\") pod \"kindnet-6ctrk\" (UID: \"2de38f2c-e56d-43ca-acd6-537a2c8c36c9\") " pod="kube-system/kindnet-6ctrk"
	Jun 25 16:29:41 multinode-552402 kubelet[3079]: I0625 16:29:41.081871    3079 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jun 25 16:30:31 multinode-552402 kubelet[3079]: E0625 16:30:31.264276    3079 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 25 16:30:31 multinode-552402 kubelet[3079]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 25 16:30:31 multinode-552402 kubelet[3079]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 25 16:30:31 multinode-552402 kubelet[3079]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 25 16:30:31 multinode-552402 kubelet[3079]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 25 16:31:31 multinode-552402 kubelet[3079]: E0625 16:31:31.257494    3079 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 25 16:31:31 multinode-552402 kubelet[3079]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 25 16:31:31 multinode-552402 kubelet[3079]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 25 16:31:31 multinode-552402 kubelet[3079]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 25 16:31:31 multinode-552402 kubelet[3079]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 25 16:32:31 multinode-552402 kubelet[3079]: E0625 16:32:31.261514    3079 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 25 16:32:31 multinode-552402 kubelet[3079]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 25 16:32:31 multinode-552402 kubelet[3079]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 25 16:32:31 multinode-552402 kubelet[3079]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 25 16:32:31 multinode-552402 kubelet[3079]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0625 16:33:17.918267   56044 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19128-13846/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-552402 -n multinode-552402
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-552402 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.15s)
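The "bufio.Scanner: token too long" error in the stderr block above is Go's bufio.ErrTooLong: by default bufio.Scanner refuses any single token larger than bufio.MaxScanTokenSize (64 KiB), and lastStart.txt evidently contains a longer line. A minimal sketch of reading such a file with an enlarged scanner buffer follows; the path and the 1 MiB cap are illustrative choices, not minikube's actual values or code.

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	// Illustrative path; the report's real file lives under .minikube/logs/lastStart.txt.
	f, err := os.Open("/tmp/lastStart.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Default cap is bufio.MaxScanTokenSize (64 KiB); allow lines up to 1 MiB instead.
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		// With the default buffer, this is where ErrTooLong ("token too long") surfaces.
		log.Fatal(err)
	}
}

Enlarging the buffer (or reading with bufio.Reader.ReadString instead of a Scanner) is the usual way to avoid this failure mode when log lines can be arbitrarily long.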

                                                
                                    
x
+
TestPreload (353.19s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-684300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0625 16:39:29.129655   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-684300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (3m30.571599693s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-684300 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-684300 image pull gcr.io/k8s-minikube/busybox: (2.868459875s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-684300
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-684300: exit status 82 (2m0.445766245s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-684300"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-684300 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-06-25 16:42:49.499875348 +0000 UTC m=+5605.644909414
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-684300 -n test-preload-684300
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-684300 -n test-preload-684300: exit status 3 (18.426246937s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0625 16:43:07.922787   59238 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.81:22: connect: no route to host
	E0625 16:43:07.922810   59238 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.81:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-684300" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-684300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-684300
--- FAIL: TestPreload (353.19s)
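Exit status 82 (GUEST_STOP_TIMEOUT) above means the stop path gave up while the VM still reported state "Running" after two minutes. A hedged sketch of that wait-until-stopped shape is below; waitForStop and the getState callback are hypothetical stand-ins for illustration, not minikube's or the kvm2 driver's real API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls a state callback until it reports "Stopped" or the timeout expires.
// getState is a hypothetical stand-in for whatever state query the driver exposes.
func waitForStop(getState func() (string, error), timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		state, err := getState()
		if err != nil {
			return err
		}
		if state == "Stopped" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Fake driver that never leaves "Running", mimicking the hung guest in this test.
	err := waitForStop(func() (string, error) { return "Running", nil }, 5*time.Second)
	fmt.Println(err)
}

When the deadline passes without the guest ever reaching "Stopped", the caller reports the timeout error, which is the shape of the GUEST_STOP_TIMEOUT failure seen here.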

                                                
                                    
x
+
TestKubernetesUpgrade (404.13s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-497568 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-497568 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m31.813642504s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-497568] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19128-13846/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19128-13846/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-497568" primary control-plane node in "kubernetes-upgrade-497568" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0625 16:44:59.301451   60307 out.go:291] Setting OutFile to fd 1 ...
	I0625 16:44:59.301736   60307 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:44:59.301747   60307 out.go:304] Setting ErrFile to fd 2...
	I0625 16:44:59.301752   60307 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:44:59.301945   60307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 16:44:59.302524   60307 out.go:298] Setting JSON to false
	I0625 16:44:59.303503   60307 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8843,"bootTime":1719325056,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0625 16:44:59.303561   60307 start.go:139] virtualization: kvm guest
	I0625 16:44:59.306106   60307 out.go:177] * [kubernetes-upgrade-497568] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0625 16:44:59.307801   60307 notify.go:220] Checking for updates...
	I0625 16:44:59.309053   60307 out.go:177]   - MINIKUBE_LOCATION=19128
	I0625 16:44:59.311253   60307 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0625 16:44:59.312943   60307 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19128-13846/kubeconfig
	I0625 16:44:59.314364   60307 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 16:44:59.316099   60307 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0625 16:44:59.318576   60307 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0625 16:44:59.320225   60307 driver.go:392] Setting default libvirt URI to qemu:///system
	I0625 16:44:59.359434   60307 out.go:177] * Using the kvm2 driver based on user configuration
	I0625 16:44:59.361054   60307 start.go:297] selected driver: kvm2
	I0625 16:44:59.361067   60307 start.go:901] validating driver "kvm2" against <nil>
	I0625 16:44:59.361077   60307 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0625 16:44:59.361746   60307 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0625 16:44:59.361803   60307 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19128-13846/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0625 16:44:59.377746   60307 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0625 16:44:59.377807   60307 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0625 16:44:59.378078   60307 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0625 16:44:59.378136   60307 cni.go:84] Creating CNI manager for ""
	I0625 16:44:59.378153   60307 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0625 16:44:59.378163   60307 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0625 16:44:59.378246   60307 start.go:340] cluster config:
	{Name:kubernetes-upgrade-497568 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-497568 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 16:44:59.378340   60307 iso.go:125] acquiring lock: {Name:mk76df652d5e768afc73443035d5ecb8b75ed16c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0625 16:44:59.379899   60307 out.go:177] * Starting "kubernetes-upgrade-497568" primary control-plane node in "kubernetes-upgrade-497568" cluster
	I0625 16:44:59.381074   60307 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0625 16:44:59.381110   60307 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0625 16:44:59.381122   60307 cache.go:56] Caching tarball of preloaded images
	I0625 16:44:59.381221   60307 preload.go:173] Found /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0625 16:44:59.381236   60307 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0625 16:44:59.381568   60307 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/config.json ...
	I0625 16:44:59.381592   60307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/config.json: {Name:mk0389ff075d3725200cbae0061318fe995a1b4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:44:59.381746   60307 start.go:360] acquireMachinesLock for kubernetes-upgrade-497568: {Name:mk2a1ebee912b37a2b68bf2f76641f82f8fc2fcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0625 16:44:59.381791   60307 start.go:364] duration metric: took 19.91µs to acquireMachinesLock for "kubernetes-upgrade-497568"
	I0625 16:44:59.381810   60307 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-497568 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-497568 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0625 16:44:59.381886   60307 start.go:125] createHost starting for "" (driver="kvm2")
	I0625 16:44:59.384242   60307 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0625 16:44:59.384382   60307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:44:59.384422   60307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:44:59.399257   60307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45711
	I0625 16:44:59.399762   60307 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:44:59.400363   60307 main.go:141] libmachine: Using API Version  1
	I0625 16:44:59.400382   60307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:44:59.400698   60307 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:44:59.400884   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetMachineName
	I0625 16:44:59.401035   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .DriverName
	I0625 16:44:59.401192   60307 start.go:159] libmachine.API.Create for "kubernetes-upgrade-497568" (driver="kvm2")
	I0625 16:44:59.401217   60307 client.go:168] LocalClient.Create starting
	I0625 16:44:59.401244   60307 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem
	I0625 16:44:59.401271   60307 main.go:141] libmachine: Decoding PEM data...
	I0625 16:44:59.401281   60307 main.go:141] libmachine: Parsing certificate...
	I0625 16:44:59.401323   60307 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem
	I0625 16:44:59.401343   60307 main.go:141] libmachine: Decoding PEM data...
	I0625 16:44:59.401351   60307 main.go:141] libmachine: Parsing certificate...
	I0625 16:44:59.401371   60307 main.go:141] libmachine: Running pre-create checks...
	I0625 16:44:59.401377   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .PreCreateCheck
	I0625 16:44:59.401861   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetConfigRaw
	I0625 16:44:59.402244   60307 main.go:141] libmachine: Creating machine...
	I0625 16:44:59.402259   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .Create
	I0625 16:44:59.402385   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Creating KVM machine...
	I0625 16:44:59.403540   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | found existing default KVM network
	I0625 16:44:59.404325   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | I0625 16:44:59.404174   60383 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1c0}
	I0625 16:44:59.404351   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | created network xml: 
	I0625 16:44:59.404362   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | <network>
	I0625 16:44:59.404372   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG |   <name>mk-kubernetes-upgrade-497568</name>
	I0625 16:44:59.404387   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG |   <dns enable='no'/>
	I0625 16:44:59.404397   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG |   
	I0625 16:44:59.404407   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0625 16:44:59.404417   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG |     <dhcp>
	I0625 16:44:59.404482   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0625 16:44:59.404503   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG |     </dhcp>
	I0625 16:44:59.404521   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG |   </ip>
	I0625 16:44:59.404533   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG |   
	I0625 16:44:59.404545   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | </network>
	I0625 16:44:59.404555   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | 
	I0625 16:44:59.409159   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | trying to create private KVM network mk-kubernetes-upgrade-497568 192.168.39.0/24...
	I0625 16:44:59.474455   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | private KVM network mk-kubernetes-upgrade-497568 192.168.39.0/24 created
	I0625 16:44:59.474503   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Setting up store path in /home/jenkins/minikube-integration/19128-13846/.minikube/machines/kubernetes-upgrade-497568 ...
	I0625 16:44:59.474534   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Building disk image from file:///home/jenkins/minikube-integration/19128-13846/.minikube/cache/iso/amd64/minikube-v1.33.1-1719245461-19128-amd64.iso
	I0625 16:44:59.474548   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | I0625 16:44:59.474435   60383 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 16:44:59.474719   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Downloading /home/jenkins/minikube-integration/19128-13846/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19128-13846/.minikube/cache/iso/amd64/minikube-v1.33.1-1719245461-19128-amd64.iso...
	I0625 16:44:59.707311   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | I0625 16:44:59.707159   60383 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/kubernetes-upgrade-497568/id_rsa...
	I0625 16:44:59.761647   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | I0625 16:44:59.761515   60383 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/kubernetes-upgrade-497568/kubernetes-upgrade-497568.rawdisk...
	I0625 16:44:59.761676   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | Writing magic tar header
	I0625 16:44:59.761690   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | Writing SSH key tar header
	I0625 16:44:59.761699   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | I0625 16:44:59.761635   60383 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19128-13846/.minikube/machines/kubernetes-upgrade-497568 ...
	I0625 16:44:59.761747   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/kubernetes-upgrade-497568
	I0625 16:44:59.761764   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube/machines
	I0625 16:44:59.761778   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube/machines/kubernetes-upgrade-497568 (perms=drwx------)
	I0625 16:44:59.761791   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube/machines (perms=drwxr-xr-x)
	I0625 16:44:59.761803   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube (perms=drwxr-xr-x)
	I0625 16:44:59.761812   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846 (perms=drwxrwxr-x)
	I0625 16:44:59.761819   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0625 16:44:59.761847   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0625 16:44:59.761865   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Creating domain...
	I0625 16:44:59.761880   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 16:44:59.761896   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846
	I0625 16:44:59.761907   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0625 16:44:59.761917   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | Checking permissions on dir: /home/jenkins
	I0625 16:44:59.761924   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | Checking permissions on dir: /home
	I0625 16:44:59.761932   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | Skipping /home - not owner
	I0625 16:44:59.763078   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) define libvirt domain using xml: 
	I0625 16:44:59.763100   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) <domain type='kvm'>
	I0625 16:44:59.763112   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)   <name>kubernetes-upgrade-497568</name>
	I0625 16:44:59.763120   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)   <memory unit='MiB'>2200</memory>
	I0625 16:44:59.763130   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)   <vcpu>2</vcpu>
	I0625 16:44:59.763140   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)   <features>
	I0625 16:44:59.763149   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)     <acpi/>
	I0625 16:44:59.763153   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)     <apic/>
	I0625 16:44:59.763159   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)     <pae/>
	I0625 16:44:59.763173   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)     
	I0625 16:44:59.763198   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)   </features>
	I0625 16:44:59.763222   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)   <cpu mode='host-passthrough'>
	I0625 16:44:59.763235   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)   
	I0625 16:44:59.763242   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)   </cpu>
	I0625 16:44:59.763255   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)   <os>
	I0625 16:44:59.763265   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)     <type>hvm</type>
	I0625 16:44:59.763277   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)     <boot dev='cdrom'/>
	I0625 16:44:59.763289   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)     <boot dev='hd'/>
	I0625 16:44:59.763297   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)     <bootmenu enable='no'/>
	I0625 16:44:59.763310   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)   </os>
	I0625 16:44:59.763339   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)   <devices>
	I0625 16:44:59.763359   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)     <disk type='file' device='cdrom'>
	I0625 16:44:59.763375   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)       <source file='/home/jenkins/minikube-integration/19128-13846/.minikube/machines/kubernetes-upgrade-497568/boot2docker.iso'/>
	I0625 16:44:59.763391   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)       <target dev='hdc' bus='scsi'/>
	I0625 16:44:59.763401   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)       <readonly/>
	I0625 16:44:59.763408   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)     </disk>
	I0625 16:44:59.763422   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)     <disk type='file' device='disk'>
	I0625 16:44:59.763436   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0625 16:44:59.763445   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)       <source file='/home/jenkins/minikube-integration/19128-13846/.minikube/machines/kubernetes-upgrade-497568/kubernetes-upgrade-497568.rawdisk'/>
	I0625 16:44:59.763455   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)       <target dev='hda' bus='virtio'/>
	I0625 16:44:59.763469   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)     </disk>
	I0625 16:44:59.763480   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)     <interface type='network'>
	I0625 16:44:59.763494   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)       <source network='mk-kubernetes-upgrade-497568'/>
	I0625 16:44:59.763509   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)       <model type='virtio'/>
	I0625 16:44:59.763517   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)     </interface>
	I0625 16:44:59.763529   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)     <interface type='network'>
	I0625 16:44:59.763547   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)       <source network='default'/>
	I0625 16:44:59.763567   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)       <model type='virtio'/>
	I0625 16:44:59.763586   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)     </interface>
	I0625 16:44:59.763599   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)     <serial type='pty'>
	I0625 16:44:59.763617   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)       <target port='0'/>
	I0625 16:44:59.763637   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)     </serial>
	I0625 16:44:59.763650   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)     <console type='pty'>
	I0625 16:44:59.763662   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)       <target type='serial' port='0'/>
	I0625 16:44:59.763674   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)     </console>
	I0625 16:44:59.763686   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)     <rng model='virtio'>
	I0625 16:44:59.763702   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)       <backend model='random'>/dev/random</backend>
	I0625 16:44:59.763717   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)     </rng>
	I0625 16:44:59.763729   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)     
	I0625 16:44:59.763739   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)     
	I0625 16:44:59.763750   60307 main.go:141] libmachine: (kubernetes-upgrade-497568)   </devices>
	I0625 16:44:59.763761   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) </domain>
	I0625 16:44:59.763775   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) 
	I0625 16:44:59.767891   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:be:73:ad in network default
	I0625 16:44:59.768486   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Ensuring networks are active...
	I0625 16:44:59.768521   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:44:59.769205   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Ensuring network default is active
	I0625 16:44:59.769533   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Ensuring network mk-kubernetes-upgrade-497568 is active
	I0625 16:44:59.769968   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Getting domain xml...
	I0625 16:44:59.770652   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Creating domain...
	I0625 16:45:01.006483   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Waiting to get IP...
	I0625 16:45:01.007213   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:01.007581   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | unable to find current IP address of domain kubernetes-upgrade-497568 in network mk-kubernetes-upgrade-497568
	I0625 16:45:01.007626   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | I0625 16:45:01.007573   60383 retry.go:31] will retry after 281.322553ms: waiting for machine to come up
	I0625 16:45:01.290011   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:01.290541   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | unable to find current IP address of domain kubernetes-upgrade-497568 in network mk-kubernetes-upgrade-497568
	I0625 16:45:01.290590   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | I0625 16:45:01.290495   60383 retry.go:31] will retry after 300.703993ms: waiting for machine to come up
	I0625 16:45:01.593065   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:01.593402   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | unable to find current IP address of domain kubernetes-upgrade-497568 in network mk-kubernetes-upgrade-497568
	I0625 16:45:01.593430   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | I0625 16:45:01.593361   60383 retry.go:31] will retry after 441.748777ms: waiting for machine to come up
	I0625 16:45:02.036514   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:02.036950   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | unable to find current IP address of domain kubernetes-upgrade-497568 in network mk-kubernetes-upgrade-497568
	I0625 16:45:02.036973   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | I0625 16:45:02.036904   60383 retry.go:31] will retry after 559.625209ms: waiting for machine to come up
	I0625 16:45:02.598565   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:02.598952   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | unable to find current IP address of domain kubernetes-upgrade-497568 in network mk-kubernetes-upgrade-497568
	I0625 16:45:02.598978   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | I0625 16:45:02.598910   60383 retry.go:31] will retry after 603.08201ms: waiting for machine to come up
	I0625 16:45:03.203268   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:03.203674   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | unable to find current IP address of domain kubernetes-upgrade-497568 in network mk-kubernetes-upgrade-497568
	I0625 16:45:03.203701   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | I0625 16:45:03.203625   60383 retry.go:31] will retry after 783.440749ms: waiting for machine to come up
	I0625 16:45:03.988278   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:03.988647   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | unable to find current IP address of domain kubernetes-upgrade-497568 in network mk-kubernetes-upgrade-497568
	I0625 16:45:03.988683   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | I0625 16:45:03.988609   60383 retry.go:31] will retry after 830.727528ms: waiting for machine to come up
	I0625 16:45:04.820905   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:04.821305   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | unable to find current IP address of domain kubernetes-upgrade-497568 in network mk-kubernetes-upgrade-497568
	I0625 16:45:04.821332   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | I0625 16:45:04.821259   60383 retry.go:31] will retry after 1.082487745s: waiting for machine to come up
	I0625 16:45:05.905367   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:05.905815   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | unable to find current IP address of domain kubernetes-upgrade-497568 in network mk-kubernetes-upgrade-497568
	I0625 16:45:05.905839   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | I0625 16:45:05.905765   60383 retry.go:31] will retry after 1.709429822s: waiting for machine to come up
	I0625 16:45:07.617580   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:07.617965   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | unable to find current IP address of domain kubernetes-upgrade-497568 in network mk-kubernetes-upgrade-497568
	I0625 16:45:07.617987   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | I0625 16:45:07.617926   60383 retry.go:31] will retry after 1.718544164s: waiting for machine to come up
	I0625 16:45:09.338643   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:09.339055   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | unable to find current IP address of domain kubernetes-upgrade-497568 in network mk-kubernetes-upgrade-497568
	I0625 16:45:09.339082   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | I0625 16:45:09.339019   60383 retry.go:31] will retry after 2.012647877s: waiting for machine to come up
	I0625 16:45:11.352794   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:11.353244   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | unable to find current IP address of domain kubernetes-upgrade-497568 in network mk-kubernetes-upgrade-497568
	I0625 16:45:11.353273   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | I0625 16:45:11.353185   60383 retry.go:31] will retry after 3.044603203s: waiting for machine to come up
	I0625 16:45:14.401216   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:14.401587   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | unable to find current IP address of domain kubernetes-upgrade-497568 in network mk-kubernetes-upgrade-497568
	I0625 16:45:14.401609   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | I0625 16:45:14.401549   60383 retry.go:31] will retry after 4.430022532s: waiting for machine to come up
	I0625 16:45:18.836344   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:18.836745   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | unable to find current IP address of domain kubernetes-upgrade-497568 in network mk-kubernetes-upgrade-497568
	I0625 16:45:18.836770   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | I0625 16:45:18.836694   60383 retry.go:31] will retry after 5.614304177s: waiting for machine to come up
	I0625 16:45:24.452781   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:24.453217   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has current primary IP address 192.168.39.64 and MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:24.453244   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Found IP for machine: 192.168.39.64
	I0625 16:45:24.453257   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Reserving static IP address...
	I0625 16:45:24.453607   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-497568", mac: "52:54:00:4f:19:9e", ip: "192.168.39.64"} in network mk-kubernetes-upgrade-497568
	I0625 16:45:24.523706   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | Getting to WaitForSSH function...
	I0625 16:45:24.523741   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Reserved static IP address: 192.168.39.64
	I0625 16:45:24.523756   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Waiting for SSH to be available...
	I0625 16:45:24.526609   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:24.526998   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:19:9e", ip: ""} in network mk-kubernetes-upgrade-497568: {Iface:virbr1 ExpiryTime:2024-06-25 17:45:14 +0000 UTC Type:0 Mac:52:54:00:4f:19:9e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4f:19:9e}
	I0625 16:45:24.527030   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined IP address 192.168.39.64 and MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:24.527131   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | Using SSH client type: external
	I0625 16:45:24.527150   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | Using SSH private key: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/kubernetes-upgrade-497568/id_rsa (-rw-------)
	I0625 16:45:24.527197   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.64 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19128-13846/.minikube/machines/kubernetes-upgrade-497568/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0625 16:45:24.527229   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | About to run SSH command:
	I0625 16:45:24.527246   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | exit 0
	I0625 16:45:24.650528   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | SSH cmd err, output: <nil>: 
	I0625 16:45:24.650804   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) KVM machine creation complete!
	I0625 16:45:24.651187   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetConfigRaw
	I0625 16:45:24.651795   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .DriverName
	I0625 16:45:24.651984   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .DriverName
	I0625 16:45:24.652114   60307 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0625 16:45:24.652124   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetState
	I0625 16:45:24.653369   60307 main.go:141] libmachine: Detecting operating system of created instance...
	I0625 16:45:24.653383   60307 main.go:141] libmachine: Waiting for SSH to be available...
	I0625 16:45:24.653388   60307 main.go:141] libmachine: Getting to WaitForSSH function...
	I0625 16:45:24.653405   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHHostname
	I0625 16:45:24.655601   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:24.655947   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:19:9e", ip: ""} in network mk-kubernetes-upgrade-497568: {Iface:virbr1 ExpiryTime:2024-06-25 17:45:14 +0000 UTC Type:0 Mac:52:54:00:4f:19:9e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:kubernetes-upgrade-497568 Clientid:01:52:54:00:4f:19:9e}
	I0625 16:45:24.655973   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined IP address 192.168.39.64 and MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:24.656095   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHPort
	I0625 16:45:24.656260   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHKeyPath
	I0625 16:45:24.656406   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHKeyPath
	I0625 16:45:24.656516   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHUsername
	I0625 16:45:24.656678   60307 main.go:141] libmachine: Using SSH client type: native
	I0625 16:45:24.656898   60307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0625 16:45:24.656912   60307 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0625 16:45:24.757734   60307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0625 16:45:24.757760   60307 main.go:141] libmachine: Detecting the provisioner...
	I0625 16:45:24.757771   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHHostname
	I0625 16:45:24.760578   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:24.760990   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:19:9e", ip: ""} in network mk-kubernetes-upgrade-497568: {Iface:virbr1 ExpiryTime:2024-06-25 17:45:14 +0000 UTC Type:0 Mac:52:54:00:4f:19:9e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:kubernetes-upgrade-497568 Clientid:01:52:54:00:4f:19:9e}
	I0625 16:45:24.761028   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined IP address 192.168.39.64 and MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:24.761186   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHPort
	I0625 16:45:24.761384   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHKeyPath
	I0625 16:45:24.761510   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHKeyPath
	I0625 16:45:24.761619   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHUsername
	I0625 16:45:24.761798   60307 main.go:141] libmachine: Using SSH client type: native
	I0625 16:45:24.762006   60307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0625 16:45:24.762018   60307 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0625 16:45:24.863761   60307 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0625 16:45:24.863857   60307 main.go:141] libmachine: found compatible host: buildroot
	I0625 16:45:24.863870   60307 main.go:141] libmachine: Provisioning with buildroot...
	I0625 16:45:24.863883   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetMachineName
	I0625 16:45:24.864158   60307 buildroot.go:166] provisioning hostname "kubernetes-upgrade-497568"
	I0625 16:45:24.864195   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetMachineName
	I0625 16:45:24.864391   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHHostname
	I0625 16:45:24.867158   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:24.867562   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:19:9e", ip: ""} in network mk-kubernetes-upgrade-497568: {Iface:virbr1 ExpiryTime:2024-06-25 17:45:14 +0000 UTC Type:0 Mac:52:54:00:4f:19:9e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:kubernetes-upgrade-497568 Clientid:01:52:54:00:4f:19:9e}
	I0625 16:45:24.867597   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined IP address 192.168.39.64 and MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:24.867743   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHPort
	I0625 16:45:24.867931   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHKeyPath
	I0625 16:45:24.868154   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHKeyPath
	I0625 16:45:24.868309   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHUsername
	I0625 16:45:24.868583   60307 main.go:141] libmachine: Using SSH client type: native
	I0625 16:45:24.868819   60307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0625 16:45:24.868848   60307 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-497568 && echo "kubernetes-upgrade-497568" | sudo tee /etc/hostname
	I0625 16:45:24.980912   60307 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-497568
	
	I0625 16:45:24.980936   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHHostname
	I0625 16:45:24.984406   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:24.984861   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:19:9e", ip: ""} in network mk-kubernetes-upgrade-497568: {Iface:virbr1 ExpiryTime:2024-06-25 17:45:14 +0000 UTC Type:0 Mac:52:54:00:4f:19:9e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:kubernetes-upgrade-497568 Clientid:01:52:54:00:4f:19:9e}
	I0625 16:45:24.984897   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined IP address 192.168.39.64 and MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:24.985075   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHPort
	I0625 16:45:24.985298   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHKeyPath
	I0625 16:45:24.985448   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHKeyPath
	I0625 16:45:24.985617   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHUsername
	I0625 16:45:24.985815   60307 main.go:141] libmachine: Using SSH client type: native
	I0625 16:45:24.986038   60307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0625 16:45:24.986064   60307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-497568' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-497568/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-497568' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0625 16:45:25.091266   60307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0625 16:45:25.091308   60307 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19128-13846/.minikube CaCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19128-13846/.minikube}
	I0625 16:45:25.091375   60307 buildroot.go:174] setting up certificates
	I0625 16:45:25.091399   60307 provision.go:84] configureAuth start
	I0625 16:45:25.091429   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetMachineName
	I0625 16:45:25.091709   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetIP
	I0625 16:45:25.094510   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:25.094883   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:19:9e", ip: ""} in network mk-kubernetes-upgrade-497568: {Iface:virbr1 ExpiryTime:2024-06-25 17:45:14 +0000 UTC Type:0 Mac:52:54:00:4f:19:9e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:kubernetes-upgrade-497568 Clientid:01:52:54:00:4f:19:9e}
	I0625 16:45:25.094914   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined IP address 192.168.39.64 and MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:25.095081   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHHostname
	I0625 16:45:25.097451   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:25.097751   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:19:9e", ip: ""} in network mk-kubernetes-upgrade-497568: {Iface:virbr1 ExpiryTime:2024-06-25 17:45:14 +0000 UTC Type:0 Mac:52:54:00:4f:19:9e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:kubernetes-upgrade-497568 Clientid:01:52:54:00:4f:19:9e}
	I0625 16:45:25.097782   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined IP address 192.168.39.64 and MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:25.097886   60307 provision.go:143] copyHostCerts
	I0625 16:45:25.097958   60307 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem, removing ...
	I0625 16:45:25.097968   60307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem
	I0625 16:45:25.098031   60307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem (1679 bytes)
	I0625 16:45:25.098135   60307 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem, removing ...
	I0625 16:45:25.098145   60307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem
	I0625 16:45:25.098169   60307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem (1078 bytes)
	I0625 16:45:25.098231   60307 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem, removing ...
	I0625 16:45:25.098238   60307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem
	I0625 16:45:25.098258   60307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem (1123 bytes)
	I0625 16:45:25.098315   60307 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-497568 san=[127.0.0.1 192.168.39.64 kubernetes-upgrade-497568 localhost minikube]
	I0625 16:45:25.265966   60307 provision.go:177] copyRemoteCerts
	I0625 16:45:25.266016   60307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0625 16:45:25.266037   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHHostname
	I0625 16:45:25.269640   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:25.269993   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:19:9e", ip: ""} in network mk-kubernetes-upgrade-497568: {Iface:virbr1 ExpiryTime:2024-06-25 17:45:14 +0000 UTC Type:0 Mac:52:54:00:4f:19:9e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:kubernetes-upgrade-497568 Clientid:01:52:54:00:4f:19:9e}
	I0625 16:45:25.270026   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined IP address 192.168.39.64 and MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:25.270197   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHPort
	I0625 16:45:25.270383   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHKeyPath
	I0625 16:45:25.270552   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHUsername
	I0625 16:45:25.270679   60307 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/kubernetes-upgrade-497568/id_rsa Username:docker}
	I0625 16:45:25.348562   60307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0625 16:45:25.372638   60307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0625 16:45:25.395766   60307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0625 16:45:25.418515   60307 provision.go:87] duration metric: took 327.101851ms to configureAuth
	I0625 16:45:25.418541   60307 buildroot.go:189] setting minikube options for container-runtime
	I0625 16:45:25.418769   60307 config.go:182] Loaded profile config "kubernetes-upgrade-497568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0625 16:45:25.418851   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHHostname
	I0625 16:45:25.421687   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:25.422078   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:19:9e", ip: ""} in network mk-kubernetes-upgrade-497568: {Iface:virbr1 ExpiryTime:2024-06-25 17:45:14 +0000 UTC Type:0 Mac:52:54:00:4f:19:9e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:kubernetes-upgrade-497568 Clientid:01:52:54:00:4f:19:9e}
	I0625 16:45:25.422109   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined IP address 192.168.39.64 and MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:25.422247   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHPort
	I0625 16:45:25.422454   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHKeyPath
	I0625 16:45:25.422677   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHKeyPath
	I0625 16:45:25.422815   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHUsername
	I0625 16:45:25.423005   60307 main.go:141] libmachine: Using SSH client type: native
	I0625 16:45:25.423216   60307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0625 16:45:25.423232   60307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0625 16:45:25.686676   60307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0625 16:45:25.686708   60307 main.go:141] libmachine: Checking connection to Docker...
	I0625 16:45:25.686719   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetURL
	I0625 16:45:25.688056   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | Using libvirt version 6000000
	I0625 16:45:25.690672   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:25.691008   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:19:9e", ip: ""} in network mk-kubernetes-upgrade-497568: {Iface:virbr1 ExpiryTime:2024-06-25 17:45:14 +0000 UTC Type:0 Mac:52:54:00:4f:19:9e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:kubernetes-upgrade-497568 Clientid:01:52:54:00:4f:19:9e}
	I0625 16:45:25.691041   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined IP address 192.168.39.64 and MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:25.691223   60307 main.go:141] libmachine: Docker is up and running!
	I0625 16:45:25.691239   60307 main.go:141] libmachine: Reticulating splines...
	I0625 16:45:25.691246   60307 client.go:171] duration metric: took 26.29002221s to LocalClient.Create
	I0625 16:45:25.691273   60307 start.go:167] duration metric: took 26.290081345s to libmachine.API.Create "kubernetes-upgrade-497568"
	I0625 16:45:25.691286   60307 start.go:293] postStartSetup for "kubernetes-upgrade-497568" (driver="kvm2")
	I0625 16:45:25.691302   60307 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0625 16:45:25.691336   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .DriverName
	I0625 16:45:25.691552   60307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0625 16:45:25.691580   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHHostname
	I0625 16:45:25.693698   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:25.694009   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:19:9e", ip: ""} in network mk-kubernetes-upgrade-497568: {Iface:virbr1 ExpiryTime:2024-06-25 17:45:14 +0000 UTC Type:0 Mac:52:54:00:4f:19:9e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:kubernetes-upgrade-497568 Clientid:01:52:54:00:4f:19:9e}
	I0625 16:45:25.694038   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined IP address 192.168.39.64 and MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:25.694196   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHPort
	I0625 16:45:25.694365   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHKeyPath
	I0625 16:45:25.694530   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHUsername
	I0625 16:45:25.694647   60307 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/kubernetes-upgrade-497568/id_rsa Username:docker}
	I0625 16:45:25.772615   60307 ssh_runner.go:195] Run: cat /etc/os-release
	I0625 16:45:25.777140   60307 info.go:137] Remote host: Buildroot 2023.02.9
	I0625 16:45:25.777161   60307 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/addons for local assets ...
	I0625 16:45:25.777220   60307 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/files for local assets ...
	I0625 16:45:25.777304   60307 filesync.go:149] local asset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> 212392.pem in /etc/ssl/certs
	I0625 16:45:25.777420   60307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0625 16:45:25.787316   60307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem --> /etc/ssl/certs/212392.pem (1708 bytes)
	I0625 16:45:25.814645   60307 start.go:296] duration metric: took 123.342845ms for postStartSetup
	I0625 16:45:25.814702   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetConfigRaw
	I0625 16:45:25.815288   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetIP
	I0625 16:45:25.817716   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:25.818105   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:19:9e", ip: ""} in network mk-kubernetes-upgrade-497568: {Iface:virbr1 ExpiryTime:2024-06-25 17:45:14 +0000 UTC Type:0 Mac:52:54:00:4f:19:9e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:kubernetes-upgrade-497568 Clientid:01:52:54:00:4f:19:9e}
	I0625 16:45:25.818134   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined IP address 192.168.39.64 and MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:25.818336   60307 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/config.json ...
	I0625 16:45:25.818524   60307 start.go:128] duration metric: took 26.436629538s to createHost
	I0625 16:45:25.818545   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHHostname
	I0625 16:45:25.820799   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:25.821113   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:19:9e", ip: ""} in network mk-kubernetes-upgrade-497568: {Iface:virbr1 ExpiryTime:2024-06-25 17:45:14 +0000 UTC Type:0 Mac:52:54:00:4f:19:9e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:kubernetes-upgrade-497568 Clientid:01:52:54:00:4f:19:9e}
	I0625 16:45:25.821154   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined IP address 192.168.39.64 and MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:25.821335   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHPort
	I0625 16:45:25.821519   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHKeyPath
	I0625 16:45:25.821662   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHKeyPath
	I0625 16:45:25.821809   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHUsername
	I0625 16:45:25.821952   60307 main.go:141] libmachine: Using SSH client type: native
	I0625 16:45:25.822156   60307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0625 16:45:25.822171   60307 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0625 16:45:25.919187   60307 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719333925.895856728
	
	I0625 16:45:25.919214   60307 fix.go:216] guest clock: 1719333925.895856728
	I0625 16:45:25.919221   60307 fix.go:229] Guest: 2024-06-25 16:45:25.895856728 +0000 UTC Remote: 2024-06-25 16:45:25.81853506 +0000 UTC m=+26.557774988 (delta=77.321668ms)
	I0625 16:45:25.919252   60307 fix.go:200] guest clock delta is within tolerance: 77.321668ms
	I0625 16:45:25.919257   60307 start.go:83] releasing machines lock for "kubernetes-upgrade-497568", held for 26.537457672s
	I0625 16:45:25.919279   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .DriverName
	I0625 16:45:25.919587   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetIP
	I0625 16:45:25.922541   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:25.922924   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:19:9e", ip: ""} in network mk-kubernetes-upgrade-497568: {Iface:virbr1 ExpiryTime:2024-06-25 17:45:14 +0000 UTC Type:0 Mac:52:54:00:4f:19:9e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:kubernetes-upgrade-497568 Clientid:01:52:54:00:4f:19:9e}
	I0625 16:45:25.922957   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined IP address 192.168.39.64 and MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:25.923135   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .DriverName
	I0625 16:45:25.923610   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .DriverName
	I0625 16:45:25.923786   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .DriverName
	I0625 16:45:25.923867   60307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0625 16:45:25.923918   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHHostname
	I0625 16:45:25.923992   60307 ssh_runner.go:195] Run: cat /version.json
	I0625 16:45:25.924045   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHHostname
	I0625 16:45:25.926712   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:25.926763   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:25.927143   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:19:9e", ip: ""} in network mk-kubernetes-upgrade-497568: {Iface:virbr1 ExpiryTime:2024-06-25 17:45:14 +0000 UTC Type:0 Mac:52:54:00:4f:19:9e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:kubernetes-upgrade-497568 Clientid:01:52:54:00:4f:19:9e}
	I0625 16:45:25.927171   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined IP address 192.168.39.64 and MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:25.927200   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:19:9e", ip: ""} in network mk-kubernetes-upgrade-497568: {Iface:virbr1 ExpiryTime:2024-06-25 17:45:14 +0000 UTC Type:0 Mac:52:54:00:4f:19:9e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:kubernetes-upgrade-497568 Clientid:01:52:54:00:4f:19:9e}
	I0625 16:45:25.927221   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined IP address 192.168.39.64 and MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:25.927296   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHPort
	I0625 16:45:25.927486   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHKeyPath
	I0625 16:45:25.927491   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHPort
	I0625 16:45:25.927675   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHKeyPath
	I0625 16:45:25.927691   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHUsername
	I0625 16:45:25.927831   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHUsername
	I0625 16:45:25.927849   60307 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/kubernetes-upgrade-497568/id_rsa Username:docker}
	I0625 16:45:25.927983   60307 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/kubernetes-upgrade-497568/id_rsa Username:docker}
	I0625 16:45:26.022865   60307 ssh_runner.go:195] Run: systemctl --version
	I0625 16:45:26.030152   60307 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0625 16:45:26.194751   60307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0625 16:45:26.201128   60307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0625 16:45:26.201193   60307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0625 16:45:26.217311   60307 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0625 16:45:26.217333   60307 start.go:494] detecting cgroup driver to use...
	I0625 16:45:26.217389   60307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0625 16:45:26.233461   60307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0625 16:45:26.247235   60307 docker.go:217] disabling cri-docker service (if available) ...
	I0625 16:45:26.247318   60307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0625 16:45:26.261760   60307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0625 16:45:26.275076   60307 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0625 16:45:26.392058   60307 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0625 16:45:26.540512   60307 docker.go:233] disabling docker service ...
	I0625 16:45:26.540603   60307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0625 16:45:26.555459   60307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0625 16:45:26.568372   60307 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0625 16:45:26.707571   60307 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0625 16:45:26.823486   60307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0625 16:45:26.839611   60307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0625 16:45:26.857999   60307 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0625 16:45:26.858061   60307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:45:26.868366   60307 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0625 16:45:26.868428   60307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:45:26.879042   60307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:45:26.889361   60307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:45:26.900788   60307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0625 16:45:26.913894   60307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0625 16:45:26.924524   60307 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0625 16:45:26.924568   60307 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0625 16:45:26.939102   60307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0625 16:45:26.949123   60307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 16:45:27.068029   60307 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0625 16:45:27.216767   60307 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0625 16:45:27.216846   60307 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0625 16:45:27.221745   60307 start.go:562] Will wait 60s for crictl version
	I0625 16:45:27.221783   60307 ssh_runner.go:195] Run: which crictl
	I0625 16:45:27.225934   60307 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0625 16:45:27.269268   60307 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0625 16:45:27.269352   60307 ssh_runner.go:195] Run: crio --version
	I0625 16:45:27.313638   60307 ssh_runner.go:195] Run: crio --version
	I0625 16:45:27.342957   60307 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0625 16:45:27.344304   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetIP
	I0625 16:45:27.346983   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:27.347324   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:19:9e", ip: ""} in network mk-kubernetes-upgrade-497568: {Iface:virbr1 ExpiryTime:2024-06-25 17:45:14 +0000 UTC Type:0 Mac:52:54:00:4f:19:9e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:kubernetes-upgrade-497568 Clientid:01:52:54:00:4f:19:9e}
	I0625 16:45:27.347346   60307 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined IP address 192.168.39.64 and MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:45:27.347520   60307 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0625 16:45:27.353578   60307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0625 16:45:27.373208   60307 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-497568 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-497568 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0625 16:45:27.373313   60307 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0625 16:45:27.373379   60307 ssh_runner.go:195] Run: sudo crictl images --output json
	I0625 16:45:27.425465   60307 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0625 16:45:27.425538   60307 ssh_runner.go:195] Run: which lz4
	I0625 16:45:27.430012   60307 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0625 16:45:27.436562   60307 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0625 16:45:27.436586   60307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0625 16:45:29.184167   60307 crio.go:462] duration metric: took 1.754194325s to copy over tarball
	I0625 16:45:29.184239   60307 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0625 16:45:31.754714   60307 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.570439154s)
	I0625 16:45:31.754753   60307 crio.go:469] duration metric: took 2.570556904s to extract the tarball
	I0625 16:45:31.754764   60307 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0625 16:45:31.797162   60307 ssh_runner.go:195] Run: sudo crictl images --output json
	I0625 16:45:31.842332   60307 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0625 16:45:31.842366   60307 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0625 16:45:31.842500   60307 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0625 16:45:31.842534   60307 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0625 16:45:31.842557   60307 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0625 16:45:31.842492   60307 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0625 16:45:31.842502   60307 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0625 16:45:31.842533   60307 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0625 16:45:31.842485   60307 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0625 16:45:31.842455   60307 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0625 16:45:31.844055   60307 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0625 16:45:31.844090   60307 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0625 16:45:31.844120   60307 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0625 16:45:31.844144   60307 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0625 16:45:31.844149   60307 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0625 16:45:31.844226   60307 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0625 16:45:31.844221   60307 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0625 16:45:31.844769   60307 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0625 16:45:32.030442   60307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0625 16:45:32.037357   60307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0625 16:45:32.046554   60307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0625 16:45:32.047038   60307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0625 16:45:32.058835   60307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0625 16:45:32.078221   60307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0625 16:45:32.110802   60307 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0625 16:45:32.110854   60307 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0625 16:45:32.110902   60307 ssh_runner.go:195] Run: which crictl
	I0625 16:45:32.113774   60307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0625 16:45:32.116615   60307 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0625 16:45:32.116653   60307 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0625 16:45:32.116692   60307 ssh_runner.go:195] Run: which crictl
	I0625 16:45:32.163574   60307 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0625 16:45:32.163620   60307 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0625 16:45:32.163676   60307 ssh_runner.go:195] Run: which crictl
	I0625 16:45:32.194125   60307 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0625 16:45:32.194170   60307 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0625 16:45:32.194172   60307 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0625 16:45:32.194200   60307 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0625 16:45:32.194213   60307 ssh_runner.go:195] Run: which crictl
	I0625 16:45:32.194249   60307 ssh_runner.go:195] Run: which crictl
	I0625 16:45:32.208754   60307 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0625 16:45:32.208794   60307 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0625 16:45:32.208816   60307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0625 16:45:32.208833   60307 ssh_runner.go:195] Run: which crictl
	I0625 16:45:32.221857   60307 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0625 16:45:32.221895   60307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0625 16:45:32.221902   60307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0625 16:45:32.221910   60307 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0625 16:45:32.221946   60307 ssh_runner.go:195] Run: which crictl
	I0625 16:45:32.221968   60307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0625 16:45:32.221980   60307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0625 16:45:32.271903   60307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0625 16:45:32.271946   60307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0625 16:45:32.319057   60307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0625 16:45:32.319088   60307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0625 16:45:32.328139   60307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0625 16:45:32.328219   60307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0625 16:45:32.328282   60307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0625 16:45:32.357155   60307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0625 16:45:32.369277   60307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0625 16:45:32.840621   60307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0625 16:45:32.979161   60307 cache_images.go:92] duration metric: took 1.136773917s to LoadCachedImages
	W0625 16:45:32.979277   60307 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19128-13846/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19128-13846/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0625 16:45:32.979296   60307 kubeadm.go:928] updating node { 192.168.39.64 8443 v1.20.0 crio true true} ...
	I0625 16:45:32.979413   60307 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-497568 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.64
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-497568 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0625 16:45:32.979490   60307 ssh_runner.go:195] Run: crio config
	I0625 16:45:33.034898   60307 cni.go:84] Creating CNI manager for ""
	I0625 16:45:33.034923   60307 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0625 16:45:33.034931   60307 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0625 16:45:33.034949   60307 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.64 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-497568 NodeName:kubernetes-upgrade-497568 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.64"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.64 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0625 16:45:33.035133   60307 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.64
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-497568"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.64
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.64"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
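The generated kubeadm config above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml.new and later copied to /var/tmp/minikube/kubeadm.yaml before kubeadm init consumes it. As a quick, illustrative sanity check using only the Go standard library (this helper is not part of minikube or the test harness), one could list the documents like this:

package main

import (
	"fmt"
	"os"
	"strings"
)

// Split the multi-document kubeadm YAML generated above and print the
// apiVersion/kind of each document, confirming that all four expected
// objects made it into the file.
func main() {
	// Path from this run; adjust as needed on another machine.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read config:", err)
		os.Exit(1)
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		var apiVersion, kind string
		for _, line := range strings.Split(doc, "\n") {
			line = strings.TrimSpace(line)
			switch {
			case strings.HasPrefix(line, "apiVersion:"):
				apiVersion = strings.TrimSpace(strings.TrimPrefix(line, "apiVersion:"))
			case strings.HasPrefix(line, "kind:"):
				kind = strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
			}
		}
		fmt.Printf("document %d: %s / %s\n", i+1, apiVersion, kind)
	}
}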
	
	I0625 16:45:33.035214   60307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0625 16:45:33.045377   60307 binaries.go:44] Found k8s binaries, skipping transfer
	I0625 16:45:33.045444   60307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0625 16:45:33.055651   60307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0625 16:45:33.073300   60307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0625 16:45:33.091249   60307 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0625 16:45:33.108200   60307 ssh_runner.go:195] Run: grep 192.168.39.64	control-plane.minikube.internal$ /etc/hosts
	I0625 16:45:33.112203   60307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.64	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0625 16:45:33.124574   60307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 16:45:33.246179   60307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0625 16:45:33.263646   60307 certs.go:68] Setting up /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568 for IP: 192.168.39.64
	I0625 16:45:33.263672   60307 certs.go:194] generating shared ca certs ...
	I0625 16:45:33.263691   60307 certs.go:226] acquiring lock for ca certs: {Name:mkac904b769881cd26c50f043dc80ff92937f71f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:45:33.263878   60307 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key
	I0625 16:45:33.263938   60307 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key
	I0625 16:45:33.263953   60307 certs.go:256] generating profile certs ...
	I0625 16:45:33.264026   60307 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/client.key
	I0625 16:45:33.264057   60307 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/client.crt with IP's: []
	I0625 16:45:33.465683   60307 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/client.crt ...
	I0625 16:45:33.465713   60307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/client.crt: {Name:mke5a399f5fa3bf4d748f526a810b9a43e4c4c7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:45:33.465884   60307 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/client.key ...
	I0625 16:45:33.465897   60307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/client.key: {Name:mk0b8d64ef42fca0d9edd3be6808b30cfea159ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:45:33.465967   60307 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/apiserver.key.c4e9e74f
	I0625 16:45:33.465985   60307 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/apiserver.crt.c4e9e74f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.64]
	I0625 16:45:33.811571   60307 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/apiserver.crt.c4e9e74f ...
	I0625 16:45:33.811604   60307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/apiserver.crt.c4e9e74f: {Name:mkc8af757f49639a2ed1f5ff337b119ecc89dd6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:45:33.811757   60307 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/apiserver.key.c4e9e74f ...
	I0625 16:45:33.811769   60307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/apiserver.key.c4e9e74f: {Name:mkc351294e3417fc36abc38fc35250bf925ee056 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:45:33.811831   60307 certs.go:381] copying /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/apiserver.crt.c4e9e74f -> /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/apiserver.crt
	I0625 16:45:33.811915   60307 certs.go:385] copying /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/apiserver.key.c4e9e74f -> /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/apiserver.key
	I0625 16:45:33.811970   60307 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/proxy-client.key
	I0625 16:45:33.811985   60307 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/proxy-client.crt with IP's: []
	I0625 16:45:33.923906   60307 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/proxy-client.crt ...
	I0625 16:45:33.923936   60307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/proxy-client.crt: {Name:mk7a88bf5b881d8b070794e41b0b089c342500aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:45:33.924121   60307 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/proxy-client.key ...
	I0625 16:45:33.924144   60307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/proxy-client.key: {Name:mkf766b4a53c4eb324db5dc8ce16b16bd0b03bf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:45:33.924384   60307 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem (1338 bytes)
	W0625 16:45:33.924429   60307 certs.go:480] ignoring /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239_empty.pem, impossibly tiny 0 bytes
	I0625 16:45:33.924445   60307 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem (1679 bytes)
	I0625 16:45:33.924475   60307 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem (1078 bytes)
	I0625 16:45:33.924503   60307 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem (1123 bytes)
	I0625 16:45:33.924535   60307 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem (1679 bytes)
	I0625 16:45:33.924587   60307 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem (1708 bytes)
	I0625 16:45:33.925236   60307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0625 16:45:33.951156   60307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0625 16:45:33.979819   60307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0625 16:45:34.004094   60307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0625 16:45:34.027136   60307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0625 16:45:34.050447   60307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0625 16:45:34.073386   60307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0625 16:45:34.098384   60307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0625 16:45:34.121873   60307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0625 16:45:34.147258   60307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem --> /usr/share/ca-certificates/21239.pem (1338 bytes)
	I0625 16:45:34.181611   60307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem --> /usr/share/ca-certificates/212392.pem (1708 bytes)
	I0625 16:45:34.212258   60307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0625 16:45:34.228751   60307 ssh_runner.go:195] Run: openssl version
	I0625 16:45:34.234415   60307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/212392.pem && ln -fs /usr/share/ca-certificates/212392.pem /etc/ssl/certs/212392.pem"
	I0625 16:45:34.245428   60307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/212392.pem
	I0625 16:45:34.249753   60307 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 25 15:51 /usr/share/ca-certificates/212392.pem
	I0625 16:45:34.249800   60307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/212392.pem
	I0625 16:45:34.255494   60307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/212392.pem /etc/ssl/certs/3ec20f2e.0"
	I0625 16:45:34.266101   60307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0625 16:45:34.277233   60307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:45:34.281711   60307 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 25 15:10 /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:45:34.281761   60307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:45:34.287269   60307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0625 16:45:34.298281   60307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21239.pem && ln -fs /usr/share/ca-certificates/21239.pem /etc/ssl/certs/21239.pem"
	I0625 16:45:34.309648   60307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21239.pem
	I0625 16:45:34.313980   60307 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 25 15:51 /usr/share/ca-certificates/21239.pem
	I0625 16:45:34.314035   60307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21239.pem
	I0625 16:45:34.319755   60307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21239.pem /etc/ssl/certs/51391683.0"
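The openssl/ln sequence above follows the standard OpenSSL trust-directory convention: each CA certificate under /etc/ssl/certs is reachable through a symlink named after its subject hash with a .0 suffix. A hedged Go sketch of the same pattern, shelling out to openssl exactly as the log does (the helper name and error handling are illustrative, not minikube code), is:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert computes the OpenSSL subject hash for certPath and creates the
// <hash>.0 symlink in trustDir, mirroring the ln -fs commands in the log.
func linkCert(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(trustDir, hash+".0")
	// Replace any stale link, as the -f in ln -fs does.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	// Paths are the ones from this run and would differ elsewhere.
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}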
	I0625 16:45:34.330863   60307 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0625 16:45:34.334992   60307 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0625 16:45:34.335049   60307 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-497568 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-497568 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 16:45:34.335147   60307 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0625 16:45:34.335185   60307 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0625 16:45:34.373921   60307 cri.go:89] found id: ""
	I0625 16:45:34.374001   60307 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0625 16:45:34.384515   60307 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0625 16:45:34.396317   60307 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0625 16:45:34.406377   60307 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0625 16:45:34.406399   60307 kubeadm.go:156] found existing configuration files:
	
	I0625 16:45:34.406442   60307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0625 16:45:34.415792   60307 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0625 16:45:34.415849   60307 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0625 16:45:34.425307   60307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0625 16:45:34.434338   60307 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0625 16:45:34.434393   60307 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0625 16:45:34.443625   60307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0625 16:45:34.452466   60307 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0625 16:45:34.452517   60307 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0625 16:45:34.461810   60307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0625 16:45:34.471061   60307 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0625 16:45:34.471111   60307 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0625 16:45:34.480690   60307 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0625 16:45:34.589380   60307 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0625 16:45:34.589446   60307 kubeadm.go:309] [preflight] Running pre-flight checks
	I0625 16:45:34.730248   60307 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0625 16:45:34.730419   60307 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0625 16:45:34.730566   60307 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0625 16:45:34.896863   60307 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0625 16:45:34.899929   60307 out.go:204]   - Generating certificates and keys ...
	I0625 16:45:34.900033   60307 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0625 16:45:34.900136   60307 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0625 16:45:35.172563   60307 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0625 16:45:35.307944   60307 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0625 16:45:35.439606   60307 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0625 16:45:35.801660   60307 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0625 16:45:35.985476   60307 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0625 16:45:35.985686   60307 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-497568 localhost] and IPs [192.168.39.64 127.0.0.1 ::1]
	I0625 16:45:36.097677   60307 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0625 16:45:36.097910   60307 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-497568 localhost] and IPs [192.168.39.64 127.0.0.1 ::1]
	I0625 16:45:36.230305   60307 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0625 16:45:36.460668   60307 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0625 16:45:36.695954   60307 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0625 16:45:36.696084   60307 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0625 16:45:37.119547   60307 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0625 16:45:37.192373   60307 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0625 16:45:37.368101   60307 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0625 16:45:37.560074   60307 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0625 16:45:37.575096   60307 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0625 16:45:37.576168   60307 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0625 16:45:37.576235   60307 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0625 16:45:37.714604   60307 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0625 16:45:37.716472   60307 out.go:204]   - Booting up control plane ...
	I0625 16:45:37.716611   60307 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0625 16:45:37.725155   60307 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0625 16:45:37.726238   60307 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0625 16:45:37.727031   60307 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0625 16:45:37.731148   60307 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0625 16:46:17.725044   60307 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0625 16:46:17.726352   60307 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0625 16:46:17.726677   60307 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0625 16:46:22.726923   60307 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0625 16:46:22.727197   60307 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0625 16:46:32.726610   60307 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0625 16:46:32.726867   60307 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0625 16:46:52.726576   60307 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0625 16:46:52.726849   60307 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0625 16:47:32.728210   60307 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0625 16:47:32.728515   60307 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0625 16:47:32.728535   60307 kubeadm.go:309] 
	I0625 16:47:32.728569   60307 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0625 16:47:32.728604   60307 kubeadm.go:309] 		timed out waiting for the condition
	I0625 16:47:32.728611   60307 kubeadm.go:309] 
	I0625 16:47:32.728644   60307 kubeadm.go:309] 	This error is likely caused by:
	I0625 16:47:32.728681   60307 kubeadm.go:309] 		- The kubelet is not running
	I0625 16:47:32.728819   60307 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0625 16:47:32.728844   60307 kubeadm.go:309] 
	I0625 16:47:32.728973   60307 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0625 16:47:32.729037   60307 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0625 16:47:32.729095   60307 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0625 16:47:32.729107   60307 kubeadm.go:309] 
	I0625 16:47:32.729251   60307 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0625 16:47:32.729366   60307 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0625 16:47:32.729377   60307 kubeadm.go:309] 
	I0625 16:47:32.729526   60307 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0625 16:47:32.729628   60307 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0625 16:47:32.729689   60307 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0625 16:47:32.729752   60307 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0625 16:47:32.729767   60307 kubeadm.go:309] 
	I0625 16:47:32.730428   60307 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0625 16:47:32.730574   60307 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0625 16:47:32.730672   60307 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
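When kubeadm reports that the kubelet never became healthy, the troubleshooting steps it prints above (systemctl status kubelet, journalctl -xeu kubelet, crictl ps/logs against the CRI-O socket) are the usual next move. A small illustrative Go wrapper that runs the same commands shown in this log, purely as a sketch and not part of the test harness, could be:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes one diagnostic command and streams its output; only the
// commands the log itself uses (crictl, journalctl) are invoked.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintf(os.Stderr, "%s %v failed: %v\n", name, args, err)
	}
}

func main() {
	// List all CRI-O containers over the socket used in this run, then
	// dump the recent kubelet journal.
	run("crictl", "--runtime-endpoint", "/var/run/crio/crio.sock", "ps", "-a")
	run("journalctl", "-u", "kubelet", "-n", "400", "--no-pager")
}

In this run those checks come back empty, which matches the later "No container was found matching ..." lines gathered after the retry also fails.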
	W0625 16:47:32.730822   60307 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-497568 localhost] and IPs [192.168.39.64 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-497568 localhost] and IPs [192.168.39.64 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-497568 localhost] and IPs [192.168.39.64 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-497568 localhost] and IPs [192.168.39.64 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0625 16:47:32.730881   60307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0625 16:47:33.838896   60307 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.10799006s)
	I0625 16:47:33.838973   60307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:47:33.853892   60307 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0625 16:47:33.863719   60307 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0625 16:47:33.863738   60307 kubeadm.go:156] found existing configuration files:
	
	I0625 16:47:33.863792   60307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0625 16:47:33.874204   60307 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0625 16:47:33.874294   60307 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0625 16:47:33.883842   60307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0625 16:47:33.892685   60307 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0625 16:47:33.892738   60307 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0625 16:47:33.901739   60307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0625 16:47:33.910740   60307 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0625 16:47:33.910800   60307 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0625 16:47:33.920920   60307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0625 16:47:33.930358   60307 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0625 16:47:33.930440   60307 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0625 16:47:33.939882   60307 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0625 16:47:34.162434   60307 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0625 16:49:30.410948   60307 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0625 16:49:30.411113   60307 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0625 16:49:30.412858   60307 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0625 16:49:30.412929   60307 kubeadm.go:309] [preflight] Running pre-flight checks
	I0625 16:49:30.413018   60307 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0625 16:49:30.413139   60307 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0625 16:49:30.413417   60307 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0625 16:49:30.413482   60307 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0625 16:49:30.414958   60307 out.go:204]   - Generating certificates and keys ...
	I0625 16:49:30.415068   60307 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0625 16:49:30.415166   60307 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0625 16:49:30.415278   60307 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0625 16:49:30.415370   60307 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0625 16:49:30.415477   60307 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0625 16:49:30.415552   60307 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0625 16:49:30.415641   60307 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0625 16:49:30.415720   60307 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0625 16:49:30.415792   60307 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0625 16:49:30.415860   60307 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0625 16:49:30.415893   60307 kubeadm.go:309] [certs] Using the existing "sa" key
	I0625 16:49:30.415939   60307 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0625 16:49:30.416002   60307 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0625 16:49:30.416081   60307 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0625 16:49:30.416150   60307 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0625 16:49:30.416197   60307 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0625 16:49:30.416345   60307 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0625 16:49:30.416529   60307 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0625 16:49:30.416601   60307 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0625 16:49:30.416700   60307 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0625 16:49:30.418268   60307 out.go:204]   - Booting up control plane ...
	I0625 16:49:30.418356   60307 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0625 16:49:30.418429   60307 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0625 16:49:30.418517   60307 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0625 16:49:30.418587   60307 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0625 16:49:30.418781   60307 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0625 16:49:30.418831   60307 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0625 16:49:30.418916   60307 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0625 16:49:30.419130   60307 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0625 16:49:30.419232   60307 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0625 16:49:30.419467   60307 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0625 16:49:30.419578   60307 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0625 16:49:30.419760   60307 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0625 16:49:30.419847   60307 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0625 16:49:30.420065   60307 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0625 16:49:30.420152   60307 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0625 16:49:30.420325   60307 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0625 16:49:30.420333   60307 kubeadm.go:309] 
	I0625 16:49:30.420371   60307 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0625 16:49:30.420405   60307 kubeadm.go:309] 		timed out waiting for the condition
	I0625 16:49:30.420413   60307 kubeadm.go:309] 
	I0625 16:49:30.420448   60307 kubeadm.go:309] 	This error is likely caused by:
	I0625 16:49:30.420477   60307 kubeadm.go:309] 		- The kubelet is not running
	I0625 16:49:30.420569   60307 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0625 16:49:30.420576   60307 kubeadm.go:309] 
	I0625 16:49:30.420709   60307 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0625 16:49:30.420743   60307 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0625 16:49:30.420772   60307 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0625 16:49:30.420781   60307 kubeadm.go:309] 
	I0625 16:49:30.420889   60307 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0625 16:49:30.420999   60307 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0625 16:49:30.421011   60307 kubeadm.go:309] 
	I0625 16:49:30.421137   60307 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0625 16:49:30.421244   60307 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0625 16:49:30.421349   60307 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0625 16:49:30.421442   60307 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0625 16:49:30.421473   60307 kubeadm.go:309] 
	I0625 16:49:30.421512   60307 kubeadm.go:393] duration metric: took 3m56.086467213s to StartCluster
	I0625 16:49:30.421554   60307 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0625 16:49:30.421614   60307 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0625 16:49:30.472992   60307 cri.go:89] found id: ""
	I0625 16:49:30.473024   60307 logs.go:276] 0 containers: []
	W0625 16:49:30.473034   60307 logs.go:278] No container was found matching "kube-apiserver"
	I0625 16:49:30.473042   60307 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0625 16:49:30.473114   60307 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0625 16:49:30.513235   60307 cri.go:89] found id: ""
	I0625 16:49:30.513266   60307 logs.go:276] 0 containers: []
	W0625 16:49:30.513280   60307 logs.go:278] No container was found matching "etcd"
	I0625 16:49:30.513288   60307 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0625 16:49:30.513351   60307 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0625 16:49:30.547193   60307 cri.go:89] found id: ""
	I0625 16:49:30.547226   60307 logs.go:276] 0 containers: []
	W0625 16:49:30.547238   60307 logs.go:278] No container was found matching "coredns"
	I0625 16:49:30.547246   60307 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0625 16:49:30.547313   60307 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0625 16:49:30.587446   60307 cri.go:89] found id: ""
	I0625 16:49:30.587474   60307 logs.go:276] 0 containers: []
	W0625 16:49:30.587484   60307 logs.go:278] No container was found matching "kube-scheduler"
	I0625 16:49:30.587492   60307 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0625 16:49:30.587562   60307 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0625 16:49:30.628150   60307 cri.go:89] found id: ""
	I0625 16:49:30.628180   60307 logs.go:276] 0 containers: []
	W0625 16:49:30.628188   60307 logs.go:278] No container was found matching "kube-proxy"
	I0625 16:49:30.628195   60307 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0625 16:49:30.628258   60307 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0625 16:49:30.667288   60307 cri.go:89] found id: ""
	I0625 16:49:30.667316   60307 logs.go:276] 0 containers: []
	W0625 16:49:30.667327   60307 logs.go:278] No container was found matching "kube-controller-manager"
	I0625 16:49:30.667336   60307 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0625 16:49:30.667401   60307 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0625 16:49:30.707834   60307 cri.go:89] found id: ""
	I0625 16:49:30.707866   60307 logs.go:276] 0 containers: []
	W0625 16:49:30.707878   60307 logs.go:278] No container was found matching "kindnet"
	I0625 16:49:30.707889   60307 logs.go:123] Gathering logs for kubelet ...
	I0625 16:49:30.707906   60307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0625 16:49:30.762024   60307 logs.go:123] Gathering logs for dmesg ...
	I0625 16:49:30.762069   60307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0625 16:49:30.779180   60307 logs.go:123] Gathering logs for describe nodes ...
	I0625 16:49:30.779229   60307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0625 16:49:30.911671   60307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0625 16:49:30.911697   60307 logs.go:123] Gathering logs for CRI-O ...
	I0625 16:49:30.911714   60307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0625 16:49:31.004232   60307 logs.go:123] Gathering logs for container status ...
	I0625 16:49:31.004269   60307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0625 16:49:31.055746   60307 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
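The preflight warning above has a direct manual remedy inside the guest; a sketch only, since minikube normally manages the kubelet unit itself:
	sudo systemctl enable --now kubelet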
	W0625 16:49:31.055798   60307 out.go:239] * 
	* 
	W0625 16:49:31.055865   60307 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0625 16:49:31.055898   60307 out.go:239] * 
	* 
	W0625 16:49:31.057053   60307 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
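As the box suggests, a full diagnostic bundle can be collected for the affected profile; a sketch using the binary and profile from this run:
	out/minikube-linux-amd64 -p kubernetes-upgrade-497568 logs --file=logs.txt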
	I0625 16:49:31.060891   60307 out.go:177] 
	W0625 16:49:31.062207   60307 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0625 16:49:31.062325   60307 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0625 16:49:31.062399   60307 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0625 16:49:31.064408   60307 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-497568 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
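The suggestion in the log points at the kubelet cgroup driver; a retry of the same failing start with that extra-config applied might look like the following (a sketch only, not a step this test performs):
	out/minikube-linux-amd64 start -p kubernetes-upgrade-497568 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd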
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-497568
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-497568: (2.31086801s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-497568 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-497568 status --format={{.Host}}: exit status 7 (76.817893ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-497568 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-497568 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (54.817867719s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-497568 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-497568 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-497568 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (77.527883ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-497568] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19128-13846/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19128-13846/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-497568
	    minikube start -p kubernetes-upgrade-497568 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4975682 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.2, by running:
	    
	    minikube start -p kubernetes-upgrade-497568 --kubernetes-version=v1.30.2
	    

                                                
                                                
** /stderr **
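Before the restart below, the profile is still running the newer version; if confirmation is wanted, the same checks the test uses elsewhere apply (no new flags assumed):
	kubectl --context kubernetes-upgrade-497568 version --output=json
	out/minikube-linux-amd64 -p kubernetes-upgrade-497568 status --format={{.Host}}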
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-497568 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-497568 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m11.471375145s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-06-25 16:51:39.93620363 +0000 UTC m=+6136.081237687
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-497568 -n kubernetes-upgrade-497568
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-497568 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-497568 logs -n 25: (1.687881801s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-514698 sudo                                | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo cat                            | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo cat                            | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo                                | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo                                | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo                                | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo cat                            | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo cat                            | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo                                | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo                                | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo                                | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo find                           | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo crio                           | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-514698                                     | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC | 25 Jun 24 16:49 UTC |
	| stop    | -p kubernetes-upgrade-497568                         | kubernetes-upgrade-497568 | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC | 25 Jun 24 16:49 UTC |
	| start   | -p kubernetes-upgrade-497568                         | kubernetes-upgrade-497568 | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC | 25 Jun 24 16:50 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p cert-expiration-076008                            | cert-expiration-076008    | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC | 25 Jun 24 16:50 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-759584                          | force-systemd-env-759584  | jenkins | v1.33.1 | 25 Jun 24 16:50 UTC | 25 Jun 24 16:50 UTC |
	| start   | -p force-systemd-flag-740596                         | force-systemd-flag-740596 | jenkins | v1.33.1 | 25 Jun 24 16:50 UTC | 25 Jun 24 16:51 UTC |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p pause-756277                                      | pause-756277              | jenkins | v1.33.1 | 25 Jun 24 16:50 UTC |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-497568                         | kubernetes-upgrade-497568 | jenkins | v1.33.1 | 25 Jun 24 16:50 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-497568                         | kubernetes-upgrade-497568 | jenkins | v1.33.1 | 25 Jun 24 16:50 UTC | 25 Jun 24 16:51 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-740596 ssh cat                    | force-systemd-flag-740596 | jenkins | v1.33.1 | 25 Jun 24 16:51 UTC | 25 Jun 24 16:51 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                   |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-740596                         | force-systemd-flag-740596 | jenkins | v1.33.1 | 25 Jun 24 16:51 UTC | 25 Jun 24 16:51 UTC |
	| start   | -p cert-options-742979                               | cert-options-742979       | jenkins | v1.33.1 | 25 Jun 24 16:51 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                            |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                        |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                          |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                     |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
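The cert-options-742979 start in the table above requests extra API server names, IPs, and port 8555. Once that cluster is up, one way to inspect the issued serving certificate is to read it over ssh and decode it on the host (a sketch; assumes openssl is available on the host and that the cluster uses the same certificateDir shown in the kubeadm output above):
	out/minikube-linux-amd64 ssh -p cert-options-742979 sudo cat /var/lib/minikube/certs/apiserver.crt | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'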
	
	
	==> Last Start <==
	Log file created at: 2024/06/25 16:51:20
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0625 16:51:20.074068   67510 out.go:291] Setting OutFile to fd 1 ...
	I0625 16:51:20.074259   67510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:51:20.074262   67510 out.go:304] Setting ErrFile to fd 2...
	I0625 16:51:20.074265   67510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:51:20.074444   67510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 16:51:20.074997   67510 out.go:298] Setting JSON to false
	I0625 16:51:20.075926   67510 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9224,"bootTime":1719325056,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0625 16:51:20.075971   67510 start.go:139] virtualization: kvm guest
	I0625 16:51:20.078016   67510 out.go:177] * [cert-options-742979] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0625 16:51:20.079514   67510 notify.go:220] Checking for updates...
	I0625 16:51:20.079621   67510 out.go:177]   - MINIKUBE_LOCATION=19128
	I0625 16:51:20.080923   67510 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0625 16:51:20.082143   67510 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19128-13846/kubeconfig
	I0625 16:51:20.083474   67510 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 16:51:20.084665   67510 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0625 16:51:20.085854   67510 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0625 16:51:20.087491   67510 config.go:182] Loaded profile config "cert-expiration-076008": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:51:20.087618   67510 config.go:182] Loaded profile config "kubernetes-upgrade-497568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:51:20.087797   67510 config.go:182] Loaded profile config "pause-756277": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:51:20.087885   67510 driver.go:392] Setting default libvirt URI to qemu:///system
	I0625 16:51:20.123405   67510 out.go:177] * Using the kvm2 driver based on user configuration
	I0625 16:51:20.124530   67510 start.go:297] selected driver: kvm2
	I0625 16:51:20.124535   67510 start.go:901] validating driver "kvm2" against <nil>
	I0625 16:51:20.124544   67510 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0625 16:51:20.125216   67510 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0625 16:51:20.125286   67510 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19128-13846/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0625 16:51:20.139888   67510 install.go:137] /home/jenkins/minikube-integration/19128-13846/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0625 16:51:20.139920   67510 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0625 16:51:20.140100   67510 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0625 16:51:20.140149   67510 cni.go:84] Creating CNI manager for ""
	I0625 16:51:20.140157   67510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0625 16:51:20.140163   67510 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0625 16:51:20.140214   67510 start.go:340] cluster config:
	{Name:cert-options-742979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:cert-options-742979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 16:51:20.140292   67510 iso.go:125] acquiring lock: {Name:mk76df652d5e768afc73443035d5ecb8b75ed16c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0625 16:51:20.141884   67510 out.go:177] * Starting "cert-options-742979" primary control-plane node in "cert-options-742979" cluster
	I0625 16:51:20.143090   67510 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0625 16:51:20.143114   67510 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0625 16:51:20.143119   67510 cache.go:56] Caching tarball of preloaded images
	I0625 16:51:20.143179   67510 preload.go:173] Found /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0625 16:51:20.143184   67510 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0625 16:51:20.143257   67510 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/config.json ...
	I0625 16:51:20.143268   67510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/config.json: {Name:mk7d6ca68175346f98a68aa85d9d9b0a0911fc3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:51:20.143378   67510 start.go:360] acquireMachinesLock for cert-options-742979: {Name:mk2a1ebee912b37a2b68bf2f76641f82f8fc2fcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0625 16:51:20.143399   67510 start.go:364] duration metric: took 14.173µs to acquireMachinesLock for "cert-options-742979"
	I0625 16:51:20.143411   67510 start.go:93] Provisioning new machine with config: &{Name:cert-options-742979 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:cert-options-742979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0625 16:51:20.143465   67510 start.go:125] createHost starting for "" (driver="kvm2")
	I0625 16:51:20.145021   67510 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0625 16:51:20.145154   67510 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19128-13846/.minikube/bin/docker-machine-driver-kvm2
	I0625 16:51:20.145189   67510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:51:20.158810   67510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35817
	I0625 16:51:20.159211   67510 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:51:20.159775   67510 main.go:141] libmachine: Using API Version  1
	I0625 16:51:20.159783   67510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:51:20.160084   67510 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:51:20.160280   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetMachineName
	I0625 16:51:20.160398   67510 main.go:141] libmachine: (cert-options-742979) Calling .DriverName
	I0625 16:51:20.160522   67510 start.go:159] libmachine.API.Create for "cert-options-742979" (driver="kvm2")
	I0625 16:51:20.160547   67510 client.go:168] LocalClient.Create starting
	I0625 16:51:20.160573   67510 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem
	I0625 16:51:20.160603   67510 main.go:141] libmachine: Decoding PEM data...
	I0625 16:51:20.160618   67510 main.go:141] libmachine: Parsing certificate...
	I0625 16:51:20.160669   67510 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem
	I0625 16:51:20.160682   67510 main.go:141] libmachine: Decoding PEM data...
	I0625 16:51:20.160689   67510 main.go:141] libmachine: Parsing certificate...
	I0625 16:51:20.160700   67510 main.go:141] libmachine: Running pre-create checks...
	I0625 16:51:20.160704   67510 main.go:141] libmachine: (cert-options-742979) Calling .PreCreateCheck
	I0625 16:51:20.161012   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetConfigRaw
	I0625 16:51:20.161374   67510 main.go:141] libmachine: Creating machine...
	I0625 16:51:20.161384   67510 main.go:141] libmachine: (cert-options-742979) Calling .Create
	I0625 16:51:20.161511   67510 main.go:141] libmachine: (cert-options-742979) Creating KVM machine...
	I0625 16:51:20.162785   67510 main.go:141] libmachine: (cert-options-742979) DBG | found existing default KVM network
	I0625 16:51:20.163882   67510 main.go:141] libmachine: (cert-options-742979) DBG | I0625 16:51:20.163699   67533 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:dc:9c:16} reservation:<nil>}
	I0625 16:51:20.164654   67510 main.go:141] libmachine: (cert-options-742979) DBG | I0625 16:51:20.164577   67533 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:3a:79:d3} reservation:<nil>}
	I0625 16:51:20.166733   67510 main.go:141] libmachine: (cert-options-742979) DBG | I0625 16:51:20.166619   67533 network.go:209] skipping subnet 192.168.61.0/24 that is reserved: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0625 16:51:20.167599   67510 main.go:141] libmachine: (cert-options-742979) DBG | I0625 16:51:20.167532   67533 network.go:211] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:62:fe:83} reservation:<nil>}
	I0625 16:51:20.168590   67510 main.go:141] libmachine: (cert-options-742979) DBG | I0625 16:51:20.168529   67533 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001135d0}
	I0625 16:51:20.168607   67510 main.go:141] libmachine: (cert-options-742979) DBG | created network xml: 
	I0625 16:51:20.168612   67510 main.go:141] libmachine: (cert-options-742979) DBG | <network>
	I0625 16:51:20.168618   67510 main.go:141] libmachine: (cert-options-742979) DBG |   <name>mk-cert-options-742979</name>
	I0625 16:51:20.168623   67510 main.go:141] libmachine: (cert-options-742979) DBG |   <dns enable='no'/>
	I0625 16:51:20.168628   67510 main.go:141] libmachine: (cert-options-742979) DBG |   
	I0625 16:51:20.168638   67510 main.go:141] libmachine: (cert-options-742979) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I0625 16:51:20.168643   67510 main.go:141] libmachine: (cert-options-742979) DBG |     <dhcp>
	I0625 16:51:20.168647   67510 main.go:141] libmachine: (cert-options-742979) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I0625 16:51:20.168651   67510 main.go:141] libmachine: (cert-options-742979) DBG |     </dhcp>
	I0625 16:51:20.168655   67510 main.go:141] libmachine: (cert-options-742979) DBG |   </ip>
	I0625 16:51:20.168660   67510 main.go:141] libmachine: (cert-options-742979) DBG |   
	I0625 16:51:20.168664   67510 main.go:141] libmachine: (cert-options-742979) DBG | </network>
	I0625 16:51:20.168668   67510 main.go:141] libmachine: (cert-options-742979) DBG | 
	I0625 16:51:20.173651   67510 main.go:141] libmachine: (cert-options-742979) DBG | trying to create private KVM network mk-cert-options-742979 192.168.83.0/24...
	I0625 16:51:20.237908   67510 main.go:141] libmachine: (cert-options-742979) DBG | private KVM network mk-cert-options-742979 192.168.83.0/24 created
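Editor's note: the lines above show the kvm2 driver scanning candidate 192.168.x.0/24 ranges, skipping subnets whose gateways are already bound to virbr interfaces, and then defining a dedicated NAT network (mk-cert-options-742979) from generated XML. The following is only a minimal sketch of that idea, not minikube's network.go logic; the candidate octet list and the interface-based check are assumptions made for illustration.

// freeprivatesubnet.go - hedged sketch: pick a free 192.168.x.0/24 subnet by
// checking which gateway addresses are already assigned to host interfaces
// (e.g. virbr1, virbr2, ...). Illustrative only.
package main

import (
	"fmt"
	"net"
)

// usedGatewayIPs collects every IPv4 address currently assigned to a host interface.
func usedGatewayIPs() (map[string]bool, error) {
	used := map[string]bool{}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return nil, err
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
			used[ipnet.IP.String()] = true
		}
	}
	return used, nil
}

func main() {
	used, err := usedGatewayIPs()
	if err != nil {
		panic(err)
	}
	// Candidate third octets roughly matching the ranges seen in the log (assumed list).
	for _, octet := range []int{39, 50, 61, 72, 83, 94} {
		gw := fmt.Sprintf("192.168.%d.1", octet)
		if used[gw] {
			fmt.Printf("skipping 192.168.%d.0/24: gateway %s already in use\n", octet, gw)
			continue
		}
		fmt.Printf("using free private subnet 192.168.%d.0/24 (gateway %s)\n", octet, gw)
		return
	}
	fmt.Println("no free candidate subnet found")
}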
	I0625 16:51:20.237932   67510 main.go:141] libmachine: (cert-options-742979) Setting up store path in /home/jenkins/minikube-integration/19128-13846/.minikube/machines/cert-options-742979 ...
	I0625 16:51:20.237952   67510 main.go:141] libmachine: (cert-options-742979) Building disk image from file:///home/jenkins/minikube-integration/19128-13846/.minikube/cache/iso/amd64/minikube-v1.33.1-1719245461-19128-amd64.iso
	I0625 16:51:20.238008   67510 main.go:141] libmachine: (cert-options-742979) DBG | I0625 16:51:20.237929   67533 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 16:51:20.238099   67510 main.go:141] libmachine: (cert-options-742979) Downloading /home/jenkins/minikube-integration/19128-13846/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19128-13846/.minikube/cache/iso/amd64/minikube-v1.33.1-1719245461-19128-amd64.iso...
	I0625 16:51:20.455903   67510 main.go:141] libmachine: (cert-options-742979) DBG | I0625 16:51:20.455760   67533 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/cert-options-742979/id_rsa...
	I0625 16:51:20.527888   67510 main.go:141] libmachine: (cert-options-742979) DBG | I0625 16:51:20.527739   67533 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/cert-options-742979/cert-options-742979.rawdisk...
	I0625 16:51:20.527915   67510 main.go:141] libmachine: (cert-options-742979) DBG | Writing magic tar header
	I0625 16:51:20.527933   67510 main.go:141] libmachine: (cert-options-742979) DBG | Writing SSH key tar header
	I0625 16:51:20.527945   67510 main.go:141] libmachine: (cert-options-742979) DBG | I0625 16:51:20.527881   67533 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19128-13846/.minikube/machines/cert-options-742979 ...
	I0625 16:51:20.528047   67510 main.go:141] libmachine: (cert-options-742979) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/cert-options-742979
	I0625 16:51:20.528060   67510 main.go:141] libmachine: (cert-options-742979) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube/machines
	I0625 16:51:20.528068   67510 main.go:141] libmachine: (cert-options-742979) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube/machines/cert-options-742979 (perms=drwx------)
	I0625 16:51:20.528076   67510 main.go:141] libmachine: (cert-options-742979) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 16:51:20.528083   67510 main.go:141] libmachine: (cert-options-742979) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846
	I0625 16:51:20.528088   67510 main.go:141] libmachine: (cert-options-742979) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0625 16:51:20.528093   67510 main.go:141] libmachine: (cert-options-742979) DBG | Checking permissions on dir: /home/jenkins
	I0625 16:51:20.528098   67510 main.go:141] libmachine: (cert-options-742979) DBG | Checking permissions on dir: /home
	I0625 16:51:20.528105   67510 main.go:141] libmachine: (cert-options-742979) DBG | Skipping /home - not owner
	I0625 16:51:20.528113   67510 main.go:141] libmachine: (cert-options-742979) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube/machines (perms=drwxr-xr-x)
	I0625 16:51:20.528118   67510 main.go:141] libmachine: (cert-options-742979) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube (perms=drwxr-xr-x)
	I0625 16:51:20.528126   67510 main.go:141] libmachine: (cert-options-742979) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846 (perms=drwxrwxr-x)
	I0625 16:51:20.528131   67510 main.go:141] libmachine: (cert-options-742979) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0625 16:51:20.528136   67510 main.go:141] libmachine: (cert-options-742979) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0625 16:51:20.528140   67510 main.go:141] libmachine: (cert-options-742979) Creating domain...
	I0625 16:51:20.529248   67510 main.go:141] libmachine: (cert-options-742979) define libvirt domain using xml: 
	I0625 16:51:20.529261   67510 main.go:141] libmachine: (cert-options-742979) <domain type='kvm'>
	I0625 16:51:20.529267   67510 main.go:141] libmachine: (cert-options-742979)   <name>cert-options-742979</name>
	I0625 16:51:20.529271   67510 main.go:141] libmachine: (cert-options-742979)   <memory unit='MiB'>2048</memory>
	I0625 16:51:20.529275   67510 main.go:141] libmachine: (cert-options-742979)   <vcpu>2</vcpu>
	I0625 16:51:20.529285   67510 main.go:141] libmachine: (cert-options-742979)   <features>
	I0625 16:51:20.529290   67510 main.go:141] libmachine: (cert-options-742979)     <acpi/>
	I0625 16:51:20.529293   67510 main.go:141] libmachine: (cert-options-742979)     <apic/>
	I0625 16:51:20.529298   67510 main.go:141] libmachine: (cert-options-742979)     <pae/>
	I0625 16:51:20.529302   67510 main.go:141] libmachine: (cert-options-742979)     
	I0625 16:51:20.529308   67510 main.go:141] libmachine: (cert-options-742979)   </features>
	I0625 16:51:20.529316   67510 main.go:141] libmachine: (cert-options-742979)   <cpu mode='host-passthrough'>
	I0625 16:51:20.529323   67510 main.go:141] libmachine: (cert-options-742979)   
	I0625 16:51:20.529329   67510 main.go:141] libmachine: (cert-options-742979)   </cpu>
	I0625 16:51:20.529335   67510 main.go:141] libmachine: (cert-options-742979)   <os>
	I0625 16:51:20.529346   67510 main.go:141] libmachine: (cert-options-742979)     <type>hvm</type>
	I0625 16:51:20.529351   67510 main.go:141] libmachine: (cert-options-742979)     <boot dev='cdrom'/>
	I0625 16:51:20.529354   67510 main.go:141] libmachine: (cert-options-742979)     <boot dev='hd'/>
	I0625 16:51:20.529359   67510 main.go:141] libmachine: (cert-options-742979)     <bootmenu enable='no'/>
	I0625 16:51:20.529362   67510 main.go:141] libmachine: (cert-options-742979)   </os>
	I0625 16:51:20.529366   67510 main.go:141] libmachine: (cert-options-742979)   <devices>
	I0625 16:51:20.529372   67510 main.go:141] libmachine: (cert-options-742979)     <disk type='file' device='cdrom'>
	I0625 16:51:20.529382   67510 main.go:141] libmachine: (cert-options-742979)       <source file='/home/jenkins/minikube-integration/19128-13846/.minikube/machines/cert-options-742979/boot2docker.iso'/>
	I0625 16:51:20.529385   67510 main.go:141] libmachine: (cert-options-742979)       <target dev='hdc' bus='scsi'/>
	I0625 16:51:20.529391   67510 main.go:141] libmachine: (cert-options-742979)       <readonly/>
	I0625 16:51:20.529397   67510 main.go:141] libmachine: (cert-options-742979)     </disk>
	I0625 16:51:20.529405   67510 main.go:141] libmachine: (cert-options-742979)     <disk type='file' device='disk'>
	I0625 16:51:20.529414   67510 main.go:141] libmachine: (cert-options-742979)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0625 16:51:20.529435   67510 main.go:141] libmachine: (cert-options-742979)       <source file='/home/jenkins/minikube-integration/19128-13846/.minikube/machines/cert-options-742979/cert-options-742979.rawdisk'/>
	I0625 16:51:20.529444   67510 main.go:141] libmachine: (cert-options-742979)       <target dev='hda' bus='virtio'/>
	I0625 16:51:20.529450   67510 main.go:141] libmachine: (cert-options-742979)     </disk>
	I0625 16:51:20.529458   67510 main.go:141] libmachine: (cert-options-742979)     <interface type='network'>
	I0625 16:51:20.529463   67510 main.go:141] libmachine: (cert-options-742979)       <source network='mk-cert-options-742979'/>
	I0625 16:51:20.529467   67510 main.go:141] libmachine: (cert-options-742979)       <model type='virtio'/>
	I0625 16:51:20.529472   67510 main.go:141] libmachine: (cert-options-742979)     </interface>
	I0625 16:51:20.529476   67510 main.go:141] libmachine: (cert-options-742979)     <interface type='network'>
	I0625 16:51:20.529480   67510 main.go:141] libmachine: (cert-options-742979)       <source network='default'/>
	I0625 16:51:20.529483   67510 main.go:141] libmachine: (cert-options-742979)       <model type='virtio'/>
	I0625 16:51:20.529488   67510 main.go:141] libmachine: (cert-options-742979)     </interface>
	I0625 16:51:20.529492   67510 main.go:141] libmachine: (cert-options-742979)     <serial type='pty'>
	I0625 16:51:20.529496   67510 main.go:141] libmachine: (cert-options-742979)       <target port='0'/>
	I0625 16:51:20.529499   67510 main.go:141] libmachine: (cert-options-742979)     </serial>
	I0625 16:51:20.529503   67510 main.go:141] libmachine: (cert-options-742979)     <console type='pty'>
	I0625 16:51:20.529507   67510 main.go:141] libmachine: (cert-options-742979)       <target type='serial' port='0'/>
	I0625 16:51:20.529511   67510 main.go:141] libmachine: (cert-options-742979)     </console>
	I0625 16:51:20.529515   67510 main.go:141] libmachine: (cert-options-742979)     <rng model='virtio'>
	I0625 16:51:20.529519   67510 main.go:141] libmachine: (cert-options-742979)       <backend model='random'>/dev/random</backend>
	I0625 16:51:20.529536   67510 main.go:141] libmachine: (cert-options-742979)     </rng>
	I0625 16:51:20.529541   67510 main.go:141] libmachine: (cert-options-742979)     
	I0625 16:51:20.529544   67510 main.go:141] libmachine: (cert-options-742979)     
	I0625 16:51:20.529553   67510 main.go:141] libmachine: (cert-options-742979)   </devices>
	I0625 16:51:20.529556   67510 main.go:141] libmachine: (cert-options-742979) </domain>
	I0625 16:51:20.529563   67510 main.go:141] libmachine: (cert-options-742979) 
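Editor's note: after generating the domain XML above, the driver defines and boots the VM through libvirt. As a rough, hedged illustration of the same flow outside minikube, the sketch below shells out to virsh against qemu:///system; the XML file path is an assumption, and minikube itself talks to libvirt through Go bindings rather than the CLI.

// definedomain.go - hedged sketch: define and start a libvirt domain from an
// XML file by shelling out to virsh. Illustrative only, not libmachine code.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ virsh %v\n%s", args, out)
	return err
}

func main() {
	// Register the domain described by the XML (path is illustrative).
	if err := run("define", "/tmp/cert-options-742979.xml"); err != nil {
		panic(err)
	}
	// Boot the freshly defined domain.
	if err := run("start", "cert-options-742979"); err != nil {
		panic(err)
	}
}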
	I0625 16:51:20.533714   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:96:b7:89 in network default
	I0625 16:51:20.534242   67510 main.go:141] libmachine: (cert-options-742979) Ensuring networks are active...
	I0625 16:51:20.534255   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:20.534889   67510 main.go:141] libmachine: (cert-options-742979) Ensuring network default is active
	I0625 16:51:20.535118   67510 main.go:141] libmachine: (cert-options-742979) Ensuring network mk-cert-options-742979 is active
	I0625 16:51:20.535525   67510 main.go:141] libmachine: (cert-options-742979) Getting domain xml...
	I0625 16:51:20.536190   67510 main.go:141] libmachine: (cert-options-742979) Creating domain...
	I0625 16:51:21.733494   67510 main.go:141] libmachine: (cert-options-742979) Waiting to get IP...
	I0625 16:51:21.734188   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:21.734780   67510 main.go:141] libmachine: (cert-options-742979) DBG | unable to find current IP address of domain cert-options-742979 in network mk-cert-options-742979
	I0625 16:51:21.734803   67510 main.go:141] libmachine: (cert-options-742979) DBG | I0625 16:51:21.734722   67533 retry.go:31] will retry after 303.332293ms: waiting for machine to come up
	I0625 16:51:22.039115   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:22.039638   67510 main.go:141] libmachine: (cert-options-742979) DBG | unable to find current IP address of domain cert-options-742979 in network mk-cert-options-742979
	I0625 16:51:22.039660   67510 main.go:141] libmachine: (cert-options-742979) DBG | I0625 16:51:22.039583   67533 retry.go:31] will retry after 278.294581ms: waiting for machine to come up
	I0625 16:51:22.319002   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:22.319667   67510 main.go:141] libmachine: (cert-options-742979) DBG | unable to find current IP address of domain cert-options-742979 in network mk-cert-options-742979
	I0625 16:51:22.319689   67510 main.go:141] libmachine: (cert-options-742979) DBG | I0625 16:51:22.319612   67533 retry.go:31] will retry after 484.741594ms: waiting for machine to come up
	I0625 16:51:22.806283   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:22.806803   67510 main.go:141] libmachine: (cert-options-742979) DBG | unable to find current IP address of domain cert-options-742979 in network mk-cert-options-742979
	I0625 16:51:22.806825   67510 main.go:141] libmachine: (cert-options-742979) DBG | I0625 16:51:22.806757   67533 retry.go:31] will retry after 510.963919ms: waiting for machine to come up
	I0625 16:51:23.319764   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:23.320187   67510 main.go:141] libmachine: (cert-options-742979) DBG | unable to find current IP address of domain cert-options-742979 in network mk-cert-options-742979
	I0625 16:51:23.320201   67510 main.go:141] libmachine: (cert-options-742979) DBG | I0625 16:51:23.320158   67533 retry.go:31] will retry after 517.690696ms: waiting for machine to come up
	I0625 16:51:23.839980   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:23.840484   67510 main.go:141] libmachine: (cert-options-742979) DBG | unable to find current IP address of domain cert-options-742979 in network mk-cert-options-742979
	I0625 16:51:23.840503   67510 main.go:141] libmachine: (cert-options-742979) DBG | I0625 16:51:23.840424   67533 retry.go:31] will retry after 764.993567ms: waiting for machine to come up
	I0625 16:51:24.607302   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:24.607786   67510 main.go:141] libmachine: (cert-options-742979) DBG | unable to find current IP address of domain cert-options-742979 in network mk-cert-options-742979
	I0625 16:51:24.607808   67510 main.go:141] libmachine: (cert-options-742979) DBG | I0625 16:51:24.607733   67533 retry.go:31] will retry after 901.820011ms: waiting for machine to come up
	I0625 16:51:25.511232   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:25.511651   67510 main.go:141] libmachine: (cert-options-742979) DBG | unable to find current IP address of domain cert-options-742979 in network mk-cert-options-742979
	I0625 16:51:25.511668   67510 main.go:141] libmachine: (cert-options-742979) DBG | I0625 16:51:25.511611   67533 retry.go:31] will retry after 918.857224ms: waiting for machine to come up
	I0625 16:51:26.431541   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:26.431928   67510 main.go:141] libmachine: (cert-options-742979) DBG | unable to find current IP address of domain cert-options-742979 in network mk-cert-options-742979
	I0625 16:51:26.431946   67510 main.go:141] libmachine: (cert-options-742979) DBG | I0625 16:51:26.431898   67533 retry.go:31] will retry after 1.586906572s: waiting for machine to come up
	I0625 16:51:28.020079   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:28.020504   67510 main.go:141] libmachine: (cert-options-742979) DBG | unable to find current IP address of domain cert-options-742979 in network mk-cert-options-742979
	I0625 16:51:28.020540   67510 main.go:141] libmachine: (cert-options-742979) DBG | I0625 16:51:28.020454   67533 retry.go:31] will retry after 2.142251558s: waiting for machine to come up
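Editor's note: the "will retry after ..." lines above come from minikube's retry helper, which polls with a growing delay until the new VM obtains a DHCP lease on the private network. A loose, hedged equivalent is sketched below: it repeatedly asks virsh domifaddr for the domain's addresses and backs off between attempts. The attempt count and delay growth are assumptions; this is not the retry.go implementation.

// waitforip.go - hedged sketch: wait for a libvirt domain to obtain an IP
// address, roughly mirroring the backoff loop visible in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	domain := "cert-options-742979" // domain name taken from the log
	delay := 300 * time.Millisecond
	for attempt := 1; attempt <= 20; attempt++ {
		out, _ := exec.Command("virsh", "--connect", "qemu:///system",
			"domifaddr", domain).CombinedOutput()
		if strings.Contains(string(out), "ipv4") {
			fmt.Printf("domain %s is up:\n%s", domain, out)
			return
		}
		fmt.Printf("attempt %d: no IP yet, retrying after %s\n", attempt, delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the delay, similar in spirit to the increasing waits logged above
	}
	fmt.Println("gave up waiting for an IP address")
}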
	I0625 16:51:32.583354   66923 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 194847f2c9380af6124ed983007f65e58128fb5bdd636d45ec7fb17ac90a82e2 13384b571d0c9d2a23979f4c01b8a5411372cad68452ac5bea553d6741e463f1 6dd25e227e5d807ed9990002ed7037672dd0da1ef1d88250c968a8766d42dcaa 13cd9489f2bff1c59a51b42e53839dfae42d10a357b12541f00eb7a4ec97ba9e aa8a30e19374b286581df6bb472b9553e70cad7b3fbbb57077e7affe711267fb f8392981aa81906b01a6643948ea2f13fec3c59ee286b445d5d126eee849058a 1b62f76de1db9690b2dd006d497ef166ae51021bccf13a0ed7b7e8601574ec86 80e0348b1f95e3f043d1f89d9695129f935c8c3e864b8d1456d908e765e26e75 a717f02e94c2ea6cf5b363bbeadd206a3cd6a0dc3d12e8203da885738937e7b4 4bf8ce1e315d49ff8e6c3721c3cc6c97a6db113635060332246d168970d4b7eb: (14.852310438s)
	W0625 16:51:32.583433   66923 kubeadm.go:638] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 194847f2c9380af6124ed983007f65e58128fb5bdd636d45ec7fb17ac90a82e2 13384b571d0c9d2a23979f4c01b8a5411372cad68452ac5bea553d6741e463f1 6dd25e227e5d807ed9990002ed7037672dd0da1ef1d88250c968a8766d42dcaa 13cd9489f2bff1c59a51b42e53839dfae42d10a357b12541f00eb7a4ec97ba9e aa8a30e19374b286581df6bb472b9553e70cad7b3fbbb57077e7affe711267fb f8392981aa81906b01a6643948ea2f13fec3c59ee286b445d5d126eee849058a 1b62f76de1db9690b2dd006d497ef166ae51021bccf13a0ed7b7e8601574ec86 80e0348b1f95e3f043d1f89d9695129f935c8c3e864b8d1456d908e765e26e75 a717f02e94c2ea6cf5b363bbeadd206a3cd6a0dc3d12e8203da885738937e7b4 4bf8ce1e315d49ff8e6c3721c3cc6c97a6db113635060332246d168970d4b7eb: Process exited with status 1
	stdout:
	194847f2c9380af6124ed983007f65e58128fb5bdd636d45ec7fb17ac90a82e2
	13384b571d0c9d2a23979f4c01b8a5411372cad68452ac5bea553d6741e463f1
	6dd25e227e5d807ed9990002ed7037672dd0da1ef1d88250c968a8766d42dcaa
	13cd9489f2bff1c59a51b42e53839dfae42d10a357b12541f00eb7a4ec97ba9e
	aa8a30e19374b286581df6bb472b9553e70cad7b3fbbb57077e7affe711267fb
	f8392981aa81906b01a6643948ea2f13fec3c59ee286b445d5d126eee849058a
	1b62f76de1db9690b2dd006d497ef166ae51021bccf13a0ed7b7e8601574ec86
	80e0348b1f95e3f043d1f89d9695129f935c8c3e864b8d1456d908e765e26e75
	a717f02e94c2ea6cf5b363bbeadd206a3cd6a0dc3d12e8203da885738937e7b4
	
	stderr:
	E0625 16:51:32.569714    3918 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bf8ce1e315d49ff8e6c3721c3cc6c97a6db113635060332246d168970d4b7eb\": container with ID starting with 4bf8ce1e315d49ff8e6c3721c3cc6c97a6db113635060332246d168970d4b7eb not found: ID does not exist" containerID="4bf8ce1e315d49ff8e6c3721c3cc6c97a6db113635060332246d168970d4b7eb"
	time="2024-06-25T16:51:32Z" level=fatal msg="stopping the container \"4bf8ce1e315d49ff8e6c3721c3cc6c97a6db113635060332246d168970d4b7eb\": rpc error: code = NotFound desc = could not find container \"4bf8ce1e315d49ff8e6c3721c3cc6c97a6db113635060332246d168970d4b7eb\": container with ID starting with 4bf8ce1e315d49ff8e6c3721c3cc6c97a6db113635060332246d168970d4b7eb not found: ID does not exist"
	I0625 16:51:32.583515   66923 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0625 16:51:32.629720   66923 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0625 16:51:32.642150   66923 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5651 Jun 25 16:50 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Jun 25 16:50 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5759 Jun 25 16:50 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jun 25 16:50 /etc/kubernetes/scheduler.conf
	
	I0625 16:51:32.642226   66923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0625 16:51:32.653766   66923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0625 16:51:32.664892   66923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0625 16:51:32.676419   66923 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0625 16:51:32.676468   66923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0625 16:51:32.686692   66923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0625 16:51:32.697570   66923 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0625 16:51:32.697610   66923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0625 16:51:32.708972   66923 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0625 16:51:32.720341   66923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0625 16:51:32.789583   66923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0625 16:51:30.164559   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:30.165215   67510 main.go:141] libmachine: (cert-options-742979) DBG | unable to find current IP address of domain cert-options-742979 in network mk-cert-options-742979
	I0625 16:51:30.165235   67510 main.go:141] libmachine: (cert-options-742979) DBG | I0625 16:51:30.165158   67533 retry.go:31] will retry after 2.479419213s: waiting for machine to come up
	I0625 16:51:32.647679   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:32.648111   67510 main.go:141] libmachine: (cert-options-742979) DBG | unable to find current IP address of domain cert-options-742979 in network mk-cert-options-742979
	I0625 16:51:32.648130   67510 main.go:141] libmachine: (cert-options-742979) DBG | I0625 16:51:32.648065   67533 retry.go:31] will retry after 3.572445602s: waiting for machine to come up
	I0625 16:51:35.808951   66820 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 afc4673099e5d0c171adb781d9b2890366b76aebd88506fdcd169c982796c793 5d61afd2cc4c8bd0458bb500b4ed4a32ae4210ac11f14960978760413d53aae9 edec890ad5331763619a1058109ef59719931eb1e66170f810b25b86a63bbd3c a43a1e8fd4e02cc25bcade220c757c0d4c7e0c5ef687525fa7058aea35ce1d0e 4711285f965e8c05454daca7fcdcc495b4cdb478f1da0464bbf229ee779c5f2a 7f6fba0c0a9f02a1736519655e7546883e15c7aad2270f2c098353e2a7a73987 7b84409a73e9b5343e7d212348d422ad0b4684236d9e23e22a5625efc3a4cf2f ff262f22a4c47e84a2d88b7d9a5081a6f3b9eb6fa7586edd89ac602bcd13064d a069650666fa8a4ce07a6ec62b130cbe95f58636168b4f2821c41980019572e6 43917e1267c632356c51bbacb32896964f0079a77ec8a33ba35a22dec780e94d: (20.580537481s)
	W0625 16:51:35.809027   66820 kubeadm.go:638] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 afc4673099e5d0c171adb781d9b2890366b76aebd88506fdcd169c982796c793 5d61afd2cc4c8bd0458bb500b4ed4a32ae4210ac11f14960978760413d53aae9 edec890ad5331763619a1058109ef59719931eb1e66170f810b25b86a63bbd3c a43a1e8fd4e02cc25bcade220c757c0d4c7e0c5ef687525fa7058aea35ce1d0e 4711285f965e8c05454daca7fcdcc495b4cdb478f1da0464bbf229ee779c5f2a 7f6fba0c0a9f02a1736519655e7546883e15c7aad2270f2c098353e2a7a73987 7b84409a73e9b5343e7d212348d422ad0b4684236d9e23e22a5625efc3a4cf2f ff262f22a4c47e84a2d88b7d9a5081a6f3b9eb6fa7586edd89ac602bcd13064d a069650666fa8a4ce07a6ec62b130cbe95f58636168b4f2821c41980019572e6 43917e1267c632356c51bbacb32896964f0079a77ec8a33ba35a22dec780e94d: Process exited with status 1
	stdout:
	afc4673099e5d0c171adb781d9b2890366b76aebd88506fdcd169c982796c793
	5d61afd2cc4c8bd0458bb500b4ed4a32ae4210ac11f14960978760413d53aae9
	edec890ad5331763619a1058109ef59719931eb1e66170f810b25b86a63bbd3c
	a43a1e8fd4e02cc25bcade220c757c0d4c7e0c5ef687525fa7058aea35ce1d0e
	4711285f965e8c05454daca7fcdcc495b4cdb478f1da0464bbf229ee779c5f2a
	7f6fba0c0a9f02a1736519655e7546883e15c7aad2270f2c098353e2a7a73987
	
	stderr:
	E0625 16:51:35.797027    3104 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b84409a73e9b5343e7d212348d422ad0b4684236d9e23e22a5625efc3a4cf2f\": container with ID starting with 7b84409a73e9b5343e7d212348d422ad0b4684236d9e23e22a5625efc3a4cf2f not found: ID does not exist" containerID="7b84409a73e9b5343e7d212348d422ad0b4684236d9e23e22a5625efc3a4cf2f"
	time="2024-06-25T16:51:35Z" level=fatal msg="stopping the container \"7b84409a73e9b5343e7d212348d422ad0b4684236d9e23e22a5625efc3a4cf2f\": rpc error: code = NotFound desc = could not find container \"7b84409a73e9b5343e7d212348d422ad0b4684236d9e23e22a5625efc3a4cf2f\": container with ID starting with 7b84409a73e9b5343e7d212348d422ad0b4684236d9e23e22a5625efc3a4cf2f not found: ID does not exist"
	I0625 16:51:35.809089   66820 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0625 16:51:35.844566   66820 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0625 16:51:35.855109   66820 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5651 Jun 25 16:49 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Jun 25 16:49 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Jun 25 16:49 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Jun 25 16:49 /etc/kubernetes/scheduler.conf
	
	I0625 16:51:35.855184   66820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0625 16:51:35.867166   66820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0625 16:51:35.877159   66820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0625 16:51:35.887384   66820 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0625 16:51:35.887424   66820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0625 16:51:35.898043   66820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0625 16:51:35.909861   66820 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0625 16:51:35.909904   66820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0625 16:51:35.919228   66820 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0625 16:51:35.928749   66820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0625 16:51:35.991424   66820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0625 16:51:36.896068   66820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0625 16:51:37.132282   66820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0625 16:51:37.223114   66820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0625 16:51:37.292737   66820 api_server.go:52] waiting for apiserver process to appear ...
	I0625 16:51:37.292821   66820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
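Editor's note: both control-plane restarts interleaved above (processes 66820 and 66923) walk the same sequence of kubeadm init phase subcommands instead of a full kubeadm init: certs, kubeconfig, kubelet-start, control-plane, and etcd, each against /var/tmp/minikube/kubeadm.yaml. The sketch below replays that order as a standalone, hedged illustration; minikube drives these commands over its ssh_runner inside the VM, whereas this sketch simply executes them locally.

// phases.go - hedged sketch: re-run the individual kubeadm init phases in the
// same order as the log above. Illustrative only, not minikube's bootstrapper.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	config := "/var/tmp/minikube/kubeadm.yaml" // path as used in the log
	for _, p := range phases {
		args := append(p, "--config", config)
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		fmt.Printf("running: kubeadm %v\n", args)
		if err := cmd.Run(); err != nil {
			fmt.Printf("phase %v failed: %v\n", p, err)
			return
		}
	}
}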
	I0625 16:51:33.902122   66923 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.1125091s)
	I0625 16:51:33.902158   66923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0625 16:51:34.143735   66923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0625 16:51:34.226458   66923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0625 16:51:34.338171   66923 api_server.go:52] waiting for apiserver process to appear ...
	I0625 16:51:34.338257   66923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 16:51:34.839239   66923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 16:51:35.339079   66923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 16:51:35.355821   66923 api_server.go:72] duration metric: took 1.017652771s to wait for apiserver process to appear ...
	I0625 16:51:35.355847   66923 api_server.go:88] waiting for apiserver healthz status ...
	I0625 16:51:35.355863   66923 api_server.go:253] Checking apiserver healthz at https://192.168.39.64:8443/healthz ...
	I0625 16:51:36.838052   66923 api_server.go:279] https://192.168.39.64:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0625 16:51:36.838093   66923 api_server.go:103] status: https://192.168.39.64:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0625 16:51:36.838109   66923 api_server.go:253] Checking apiserver healthz at https://192.168.39.64:8443/healthz ...
	I0625 16:51:36.876889   66923 api_server.go:279] https://192.168.39.64:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0625 16:51:36.876925   66923 api_server.go:103] status: https://192.168.39.64:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0625 16:51:36.876956   66923 api_server.go:253] Checking apiserver healthz at https://192.168.39.64:8443/healthz ...
	I0625 16:51:36.899810   66923 api_server.go:279] https://192.168.39.64:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0625 16:51:36.899839   66923 api_server.go:103] status: https://192.168.39.64:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0625 16:51:37.356359   66923 api_server.go:253] Checking apiserver healthz at https://192.168.39.64:8443/healthz ...
	I0625 16:51:37.363251   66923 api_server.go:279] https://192.168.39.64:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0625 16:51:37.363288   66923 api_server.go:103] status: https://192.168.39.64:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0625 16:51:37.855902   66923 api_server.go:253] Checking apiserver healthz at https://192.168.39.64:8443/healthz ...
	I0625 16:51:37.865977   66923 api_server.go:279] https://192.168.39.64:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0625 16:51:37.866006   66923 api_server.go:103] status: https://192.168.39.64:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0625 16:51:38.356367   66923 api_server.go:253] Checking apiserver healthz at https://192.168.39.64:8443/healthz ...
	I0625 16:51:38.360804   66923 api_server.go:279] https://192.168.39.64:8443/healthz returned 200:
	ok
	I0625 16:51:38.367084   66923 api_server.go:141] control plane version: v1.30.2
	I0625 16:51:38.367107   66923 api_server.go:131] duration metric: took 3.011254797s to wait for apiserver health ...
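Editor's note: once the control-plane phases finish, minikube polls the apiserver's /healthz endpoint until it returns 200, tolerating the 403 (anonymous access denied) and 500 (post-start hooks still completing) responses shown above. The following is a hedged Go sketch of such a loop; the attempt limit and interval are assumptions, and TLS verification is skipped here purely to keep the illustration short.

// healthz.go - hedged sketch: poll an apiserver /healthz endpoint until it
// reports 200 OK, tolerating the 403/500 responses seen while post-start
// hooks are still syncing. Illustrative only, not minikube's api_server.go.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.39.64:8443/healthz" // endpoint taken from the log
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for attempt := 1; attempt <= 30; attempt++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz ok after %d attempt(s): %s\n", attempt, body)
				return
			}
			fmt.Printf("attempt %d: healthz returned %d, retrying\n", attempt, resp.StatusCode)
		} else {
			fmt.Printf("attempt %d: %v, retrying\n", attempt, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}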
	I0625 16:51:38.367115   66923 cni.go:84] Creating CNI manager for ""
	I0625 16:51:38.367121   66923 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0625 16:51:38.368868   66923 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0625 16:51:38.370200   66923 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0625 16:51:38.382313   66923 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0625 16:51:38.405200   66923 system_pods.go:43] waiting for kube-system pods to appear ...
	I0625 16:51:38.417439   66923 system_pods.go:59] 8 kube-system pods found
	I0625 16:51:38.417474   66923 system_pods.go:61] "coredns-7db6d8ff4d-89722" [5cf3f382-88e6-4315-a4d7-a53bdde4b5b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0625 16:51:38.417485   66923 system_pods.go:61] "coredns-7db6d8ff4d-ccplc" [5245073f-76d9-4bb4-a6b5-b38135f49d01] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0625 16:51:38.417496   66923 system_pods.go:61] "etcd-kubernetes-upgrade-497568" [53e6b9ca-45aa-4442-a1b8-64ad9d8da86a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0625 16:51:38.417505   66923 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-497568" [b873529a-bce2-4bf5-bca1-4e83c64f31b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0625 16:51:38.417522   66923 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-497568" [5623f98e-494f-46f2-bf2b-d84a6a9bfd91] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0625 16:51:38.417532   66923 system_pods.go:61] "kube-proxy-fcfpx" [d2a574fa-84c6-41ad-8d0d-8b4e6558a2e0] Running
	I0625 16:51:38.417537   66923 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-497568" [d21a33db-2252-47bb-8b9a-cafb171f136d] Running
	I0625 16:51:38.417545   66923 system_pods.go:61] "storage-provisioner" [7d703bca-2052-461b-b73a-e2cb459196f4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0625 16:51:38.417552   66923 system_pods.go:74] duration metric: took 12.336048ms to wait for pod list to return data ...
	I0625 16:51:38.417563   66923 node_conditions.go:102] verifying NodePressure condition ...
	I0625 16:51:38.421543   66923 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0625 16:51:38.421574   66923 node_conditions.go:123] node cpu capacity is 2
	I0625 16:51:38.421586   66923 node_conditions.go:105] duration metric: took 4.017879ms to run NodePressure ...
	I0625 16:51:38.421607   66923 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0625 16:51:38.766504   66923 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0625 16:51:38.780688   66923 ops.go:34] apiserver oom_adj: -16
	I0625 16:51:38.780712   66923 kubeadm.go:591] duration metric: took 21.116647447s to restartPrimaryControlPlane
	I0625 16:51:38.780725   66923 kubeadm.go:393] duration metric: took 21.240657902s to StartCluster
	I0625 16:51:38.780746   66923 settings.go:142] acquiring lock: {Name:mk38d7db80b40da56857d65b8e7da05700cdb9d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:51:38.780838   66923 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19128-13846/kubeconfig
	I0625 16:51:38.781978   66923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/kubeconfig: {Name:mk71a37176bd7deadd1f1cd3c756fe56f3b0810d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:51:38.782188   66923 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0625 16:51:38.782364   66923 config.go:182] Loaded profile config "kubernetes-upgrade-497568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:51:38.782371   66923 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0625 16:51:38.782452   66923 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-497568"
	I0625 16:51:38.782501   66923 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-497568"
	I0625 16:51:38.782509   66923 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-497568"
	W0625 16:51:38.782519   66923 addons.go:243] addon storage-provisioner should already be in state true
	I0625 16:51:38.782528   66923 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-497568"
	I0625 16:51:38.782550   66923 host.go:66] Checking if "kubernetes-upgrade-497568" exists ...
	I0625 16:51:38.782876   66923 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19128-13846/.minikube/bin/docker-machine-driver-kvm2
	I0625 16:51:38.782907   66923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:51:38.782975   66923 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19128-13846/.minikube/bin/docker-machine-driver-kvm2
	I0625 16:51:38.783005   66923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:51:38.783706   66923 out.go:177] * Verifying Kubernetes components...
	I0625 16:51:38.785075   66923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 16:51:38.799051   66923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32875
	I0625 16:51:38.799570   66923 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:51:38.800142   66923 main.go:141] libmachine: Using API Version  1
	I0625 16:51:38.800165   66923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:51:38.800522   66923 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:51:38.801029   66923 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19128-13846/.minikube/bin/docker-machine-driver-kvm2
	I0625 16:51:38.801062   66923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:51:38.803141   66923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37191
	I0625 16:51:38.803531   66923 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:51:38.803888   66923 main.go:141] libmachine: Using API Version  1
	I0625 16:51:38.803901   66923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:51:38.804268   66923 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:51:38.804489   66923 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetState
	I0625 16:51:38.807380   66923 kapi.go:59] client config for kubernetes-upgrade-497568: &rest.Config{Host:"https://192.168.39.64:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/client.crt", KeyFile:"/home/jenkins/minikube-integration/19128-13846/.minikube/profiles/kubernetes-upgrade-497568/client.key", CAFile:"/home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil)
, CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfa3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0625 16:51:38.807692   66923 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-497568"
	W0625 16:51:38.807708   66923 addons.go:243] addon default-storageclass should already be in state true
	I0625 16:51:38.807736   66923 host.go:66] Checking if "kubernetes-upgrade-497568" exists ...
	I0625 16:51:38.808057   66923 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19128-13846/.minikube/bin/docker-machine-driver-kvm2
	I0625 16:51:38.808078   66923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:51:38.816955   66923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34705
	I0625 16:51:38.817624   66923 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:51:38.818129   66923 main.go:141] libmachine: Using API Version  1
	I0625 16:51:38.818149   66923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:51:38.818506   66923 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:51:38.818674   66923 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetState
	I0625 16:51:38.820467   66923 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .DriverName
	I0625 16:51:38.822380   66923 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0625 16:51:38.823637   66923 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0625 16:51:38.823649   66923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0625 16:51:38.823662   66923 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHHostname
	I0625 16:51:38.826643   66923 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:51:38.827190   66923 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:19:9e", ip: ""} in network mk-kubernetes-upgrade-497568: {Iface:virbr1 ExpiryTime:2024-06-25 17:45:14 +0000 UTC Type:0 Mac:52:54:00:4f:19:9e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:kubernetes-upgrade-497568 Clientid:01:52:54:00:4f:19:9e}
	I0625 16:51:38.827214   66923 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined IP address 192.168.39.64 and MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:51:38.827472   66923 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHPort
	I0625 16:51:38.827765   66923 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHKeyPath
	I0625 16:51:38.827906   66923 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHUsername
	I0625 16:51:38.828037   66923 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/kubernetes-upgrade-497568/id_rsa Username:docker}
	I0625 16:51:38.828562   66923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39409
	I0625 16:51:38.829144   66923 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:51:38.829586   66923 main.go:141] libmachine: Using API Version  1
	I0625 16:51:38.829608   66923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:51:38.829967   66923 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:51:38.830541   66923 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19128-13846/.minikube/bin/docker-machine-driver-kvm2
	I0625 16:51:38.830580   66923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:51:38.844850   66923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39127
	I0625 16:51:38.845230   66923 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:51:38.845682   66923 main.go:141] libmachine: Using API Version  1
	I0625 16:51:38.845704   66923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:51:38.846094   66923 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:51:38.846259   66923 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetState
	I0625 16:51:38.847795   66923 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .DriverName
	I0625 16:51:38.848037   66923 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0625 16:51:38.848056   66923 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0625 16:51:38.848081   66923 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHHostname
	I0625 16:51:38.851088   66923 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:51:38.851523   66923 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:19:9e", ip: ""} in network mk-kubernetes-upgrade-497568: {Iface:virbr1 ExpiryTime:2024-06-25 17:45:14 +0000 UTC Type:0 Mac:52:54:00:4f:19:9e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:kubernetes-upgrade-497568 Clientid:01:52:54:00:4f:19:9e}
	I0625 16:51:38.851552   66923 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | domain kubernetes-upgrade-497568 has defined IP address 192.168.39.64 and MAC address 52:54:00:4f:19:9e in network mk-kubernetes-upgrade-497568
	I0625 16:51:38.852029   66923 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHPort
	I0625 16:51:38.852265   66923 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHKeyPath
	I0625 16:51:38.852596   66923 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .GetSSHUsername
	I0625 16:51:38.852731   66923 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/kubernetes-upgrade-497568/id_rsa Username:docker}
	I0625 16:51:38.999891   66923 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0625 16:51:39.021221   66923 api_server.go:52] waiting for apiserver process to appear ...
	I0625 16:51:39.021313   66923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 16:51:39.038113   66923 api_server.go:72] duration metric: took 255.888751ms to wait for apiserver process to appear ...
	I0625 16:51:39.038143   66923 api_server.go:88] waiting for apiserver healthz status ...
	I0625 16:51:39.038164   66923 api_server.go:253] Checking apiserver healthz at https://192.168.39.64:8443/healthz ...
	I0625 16:51:39.044459   66923 api_server.go:279] https://192.168.39.64:8443/healthz returned 200:
	ok
	I0625 16:51:39.045391   66923 api_server.go:141] control plane version: v1.30.2
	I0625 16:51:39.045412   66923 api_server.go:131] duration metric: took 7.261785ms to wait for apiserver health ...
	I0625 16:51:39.045424   66923 system_pods.go:43] waiting for kube-system pods to appear ...
	I0625 16:51:39.051849   66923 system_pods.go:59] 8 kube-system pods found
	I0625 16:51:39.051882   66923 system_pods.go:61] "coredns-7db6d8ff4d-89722" [5cf3f382-88e6-4315-a4d7-a53bdde4b5b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0625 16:51:39.051891   66923 system_pods.go:61] "coredns-7db6d8ff4d-ccplc" [5245073f-76d9-4bb4-a6b5-b38135f49d01] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0625 16:51:39.051902   66923 system_pods.go:61] "etcd-kubernetes-upgrade-497568" [53e6b9ca-45aa-4442-a1b8-64ad9d8da86a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0625 16:51:39.051912   66923 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-497568" [b873529a-bce2-4bf5-bca1-4e83c64f31b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0625 16:51:39.051922   66923 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-497568" [5623f98e-494f-46f2-bf2b-d84a6a9bfd91] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0625 16:51:39.051933   66923 system_pods.go:61] "kube-proxy-fcfpx" [d2a574fa-84c6-41ad-8d0d-8b4e6558a2e0] Running
	I0625 16:51:39.051939   66923 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-497568" [d21a33db-2252-47bb-8b9a-cafb171f136d] Running
	I0625 16:51:39.051943   66923 system_pods.go:61] "storage-provisioner" [7d703bca-2052-461b-b73a-e2cb459196f4] Running
	I0625 16:51:39.051954   66923 system_pods.go:74] duration metric: took 6.523101ms to wait for pod list to return data ...
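The system_pods.go wait above lists the kube-system pods and reports each pod's Ready condition. A self-contained client-go sketch of that kind of check follows; the kubeconfig path is a placeholder assumption (minikube builds its rest.Config in memory instead, as the kapi.go dump earlier shows).

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; substitute the kubeconfig you actually use.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%q phase=%s ready=%t\n", p.Name, p.Status.Phase, ready)
	}
}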
	I0625 16:51:39.051969   66923 kubeadm.go:576] duration metric: took 269.747994ms to wait for: map[apiserver:true system_pods:true]
	I0625 16:51:39.051987   66923 node_conditions.go:102] verifying NodePressure condition ...
	I0625 16:51:39.055633   66923 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0625 16:51:39.055652   66923 node_conditions.go:123] node cpu capacity is 2
	I0625 16:51:39.055663   66923 node_conditions.go:105] duration metric: took 3.669659ms to run NodePressure ...
	I0625 16:51:39.055675   66923 start.go:240] waiting for startup goroutines ...
	I0625 16:51:39.128222   66923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0625 16:51:39.148066   66923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
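Both addon manifests are applied by shelling out to the bundled kubectl with KUBECONFIG pointed at the in-VM config. A small Go sketch of that pattern using os/exec; the binary name and paths here are assumptions for illustration, not the exact commands minikube constructs.

package main

import (
	"fmt"
	"os/exec"
)

// applyManifest mirrors the "kubectl apply -f <manifest>" step from the log,
// with KUBECONFIG supplied via the environment.
func applyManifest(kubeconfig, manifest string) error {
	cmd := exec.Command("kubectl", "apply", "-f", manifest)
	cmd.Env = append(cmd.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply %s: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	} {
		if err := applyManifest("/var/lib/minikube/kubeconfig", m); err != nil {
			fmt.Println(err)
		}
	}
}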
	I0625 16:51:39.856203   66923 main.go:141] libmachine: Making call to close driver server
	I0625 16:51:39.856245   66923 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .Close
	I0625 16:51:39.856257   66923 main.go:141] libmachine: Making call to close driver server
	I0625 16:51:39.856277   66923 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .Close
	I0625 16:51:39.856586   66923 main.go:141] libmachine: Successfully made call to close driver server
	I0625 16:51:39.856604   66923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 16:51:39.856614   66923 main.go:141] libmachine: Making call to close driver server
	I0625 16:51:39.856623   66923 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .Close
	I0625 16:51:39.856625   66923 main.go:141] libmachine: Successfully made call to close driver server
	I0625 16:51:39.856672   66923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 16:51:39.856688   66923 main.go:141] libmachine: Making call to close driver server
	I0625 16:51:39.856691   66923 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | Closing plugin on server side
	I0625 16:51:39.856696   66923 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .Close
	I0625 16:51:39.856898   66923 main.go:141] libmachine: Successfully made call to close driver server
	I0625 16:51:39.856932   66923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 16:51:39.858402   66923 main.go:141] libmachine: Successfully made call to close driver server
	I0625 16:51:39.858417   66923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 16:51:39.858412   66923 main.go:141] libmachine: (kubernetes-upgrade-497568) DBG | Closing plugin on server side
	I0625 16:51:39.864906   66923 main.go:141] libmachine: Making call to close driver server
	I0625 16:51:39.864922   66923 main.go:141] libmachine: (kubernetes-upgrade-497568) Calling .Close
	I0625 16:51:39.865133   66923 main.go:141] libmachine: Successfully made call to close driver server
	I0625 16:51:39.865145   66923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0625 16:51:39.866979   66923 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0625 16:51:39.868280   66923 addons.go:510] duration metric: took 1.085904582s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0625 16:51:39.868321   66923 start.go:245] waiting for cluster config update ...
	I0625 16:51:39.868336   66923 start.go:254] writing updated cluster config ...
	I0625 16:51:39.868626   66923 ssh_runner.go:195] Run: rm -f paused
	I0625 16:51:39.920313   66923 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0625 16:51:39.922092   66923 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-497568" cluster and "default" namespace by default
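The closing check (start.go:600) compares the local kubectl version against the cluster's reported control-plane version before declaring the profile ready. A brief discovery-client sketch that fetches the server version the way a client would; the kubeconfig path is again a placeholder assumption.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; the log shows minikube writing its own kubeconfig under
	// the integration workspace instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // the log reports v1.30.2 here
}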
	I0625 16:51:36.221906   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:36.222399   67510 main.go:141] libmachine: (cert-options-742979) DBG | unable to find current IP address of domain cert-options-742979 in network mk-cert-options-742979
	I0625 16:51:36.222419   67510 main.go:141] libmachine: (cert-options-742979) DBG | I0625 16:51:36.222354   67533 retry.go:31] will retry after 2.967156617s: waiting for machine to come up
	I0625 16:51:39.191783   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:39.192297   67510 main.go:141] libmachine: (cert-options-742979) DBG | unable to find current IP address of domain cert-options-742979 in network mk-cert-options-742979
	I0625 16:51:39.192318   67510 main.go:141] libmachine: (cert-options-742979) DBG | I0625 16:51:39.192238   67533 retry.go:31] will retry after 4.091709455s: waiting for machine to come up
	
	
	==> CRI-O <==
	Jun 25 16:51:40 kubernetes-upgrade-497568 crio[3026]: time="2024-06-25 16:51:40.651578998Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719334300651553777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e467b713-cb2d-4071-b5ca-8fab35463e70 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:51:40 kubernetes-upgrade-497568 crio[3026]: time="2024-06-25 16:51:40.652106748Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67515c10-1313-4fc7-be86-5a408f72acc0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:51:40 kubernetes-upgrade-497568 crio[3026]: time="2024-06-25 16:51:40.652163386Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67515c10-1313-4fc7-be86-5a408f72acc0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:51:40 kubernetes-upgrade-497568 crio[3026]: time="2024-06-25 16:51:40.652551847Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7c6e5120381d241a1f614ae8ccf792589c8c65968697d8100953aabbd928506,PodSandboxId:8b5cc3687c3422fd850a67359f1345cffcfe1868be66205efbcf858106278867,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719334297600084758,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-89722,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cf3f382-88e6-4315-a4d7-a53bdde4b5b9,},Annotations:map[string]string{io.kubernetes.container.hash: e5c6b28b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:181827be80e23cf0db12421f32e15c0b9106ff64910bcc89040456bf7a9a6703,PodSandboxId:b27e09ac51cb35a4f4118366dd84e105dcd6d35d21c46fe68c2dce17b421b030,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719334297594521673,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ccplc,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 5245073f-76d9-4bb4-a6b5-b38135f49d01,},Annotations:map[string]string{io.kubernetes.container.hash: 7b50572c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a24cf1296d56e1a7ce480df5c60837a0b7747fc84e064541e0196ddb57156b62,PodSandboxId:e0af35edf91de34be5cf6e4cc8e124c939ba7da6ed772d735002c55d012e0784,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1719334297575055914,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d703bca-2052-461b-b73a-e2cb459196f4,},Annotations:map[string]string{io.kubernetes.container.hash: 36bcb85,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8da2d8e5e05271e417acf33cc3111081755f2e856eaabecca249351be681eb49,PodSandboxId:d5ea9626218c4eff1d102d659b38017eba97af3b68d11cfbd5881f0d6c2ee918,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNIN
G,CreatedAt:1719334294768116475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282933f59338969ac7080f7c297c2d28,},Annotations:map[string]string{io.kubernetes.container.hash: 155cfea6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce113d0dcfda0193ff23c4ba576dff46e889b9aa9d9a8597c6aeee48573ab194,PodSandboxId:30634e1867bd348256f198a0796aa96abb785c111b58365323bab16aaec212ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RU
NNING,CreatedAt:1719334294764945654,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e7e9a8ad137bb1be2500b260890a7e8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75e13dd03e688e003dedb84857fd11b2bfdf538149b585c299f579794f29f13a,PodSandboxId:e0af35edf91de34be5cf6e4cc8e124c939ba7da6ed772d735002c55d012e0784,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,
State:CONTAINER_EXITED,CreatedAt:1719334292476854271,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d703bca-2052-461b-b73a-e2cb459196f4,},Annotations:map[string]string{io.kubernetes.container.hash: 36bcb85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2eec6e3c4e05ac9087bf9e4065268db0c54ab9b0a7e51c7fc85c16c8357182b,PodSandboxId:4c7d08ef3c1725a5a8384e380c66a9a89bd4f3960d467ffc39ccae540eeef419,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,Creat
edAt:1719334291465031540,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76aa076a48867817fc24fc7b64301700,},Annotations:map[string]string{io.kubernetes.container.hash: 8c22f90b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a458b4aa87d1284e611fa7ae805ebf44bd99abe5c1a2a8bda9308b1be83d844f,PodSandboxId:63a4409f57cb70bafbf40785a9e05072fcc97bf995451101d2cac910dfd96d35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719334290466822423,Labe
ls:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fcfpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2a574fa-84c6-41ad-8d0d-8b4e6558a2e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6f9422e5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fdddf32f93cf3f0f081aab0b27d1b6ccd2592b10dd4f59a0fd234b4ac7de910,PodSandboxId:8850614681ae1a1bd5f67535280ff615daa1b2ab41c71b7610acaf73f05f3a49,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719334285243968941,Labels:map[string]string{io.kuber
netes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae702f608c982e45d4f0da8e4e941cd4,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194847f2c9380af6124ed983007f65e58128fb5bdd636d45ec7fb17ac90a82e2,PodSandboxId:b27e09ac51cb35a4f4118366dd84e105dcd6d35d21c46fe68c2dce17b421b030,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719334277166218659,Labels:map[string]string{io.kubernetes.containe
r.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ccplc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5245073f-76d9-4bb4-a6b5-b38135f49d01,},Annotations:map[string]string{io.kubernetes.container.hash: 7b50572c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13384b571d0c9d2a23979f4c01b8a5411372cad68452ac5bea553d6741e463f1,PodSandboxId:8b5cc3687c3422fd850a67359f1345cffcfe1868be66205efbcf858106278867,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719334277125585879,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-89722,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cf3f382-88e6-4315-a4d7-a53bdde4b5b9,},Annotations:map[string]string{io.kubernetes.container.hash: e5c6b28b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa8a30e19374b286581df6bb472b9553e70cad7b3fbbb57077e7affe711267fb,PodSandboxId:892e4da39c1d2c4e252e2f72d7a8c1e5ba2c19f058483b7bb6591cce655a45
de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1719334273872676625,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fcfpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2a574fa-84c6-41ad-8d0d-8b4e6558a2e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6f9422e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dd25e227e5d807ed9990002ed7037672dd0da1ef1d88250c968a8766d42dcaa,PodSandboxId:cb5e5427fe5d6c25ef9f4f7f7a83e2f8480d04858542a8fcfaae1e68057a2eec,Metadata:&ContainerMetadata{Nam
e:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1719334273921118039,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282933f59338969ac7080f7c297c2d28,},Annotations:map[string]string{io.kubernetes.container.hash: 155cfea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8392981aa81906b01a6643948ea2f13fec3c59ee286b445d5d126eee849058a,PodSandboxId:0b041a0005fe096fee580a0875f81582f4d491af1b5c6220b81dd5662b57414c,Metadata:&ContainerMetadata{Name:kube
-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1719334273819262718,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae702f608c982e45d4f0da8e4e941cd4,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e0348b1f95e3f043d1f89d9695129f935c8c3e864b8d1456d908e765e26e75,PodSandboxId:18837eb823d203fe4e991a8017295562b16399a61981f287b610a5c9fe374e80,Metadata:&ContainerMetadata{Name:etcd,Attemp
t:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1719334273520041732,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76aa076a48867817fc24fc7b64301700,},Annotations:map[string]string{io.kubernetes.container.hash: 8c22f90b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b62f76de1db9690b2dd006d497ef166ae51021bccf13a0ed7b7e8601574ec86,PodSandboxId:55fa488e1421cdd5036d4164360555fda510feb3946a4f1840fabfba574d9e60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&Imag
eSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1719334273651414986,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e7e9a8ad137bb1be2500b260890a7e8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=67515c10-1313-4fc7-be86-5a408f72acc0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:51:40 kubernetes-upgrade-497568 crio[3026]: time="2024-06-25 16:51:40.700370774Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf3a0abf-29b1-494a-8526-0cb048d74abb name=/runtime.v1.RuntimeService/Version
	Jun 25 16:51:40 kubernetes-upgrade-497568 crio[3026]: time="2024-06-25 16:51:40.700446550Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf3a0abf-29b1-494a-8526-0cb048d74abb name=/runtime.v1.RuntimeService/Version
	Jun 25 16:51:40 kubernetes-upgrade-497568 crio[3026]: time="2024-06-25 16:51:40.701832496Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d74f3b91-74f7-4339-8ee1-03a9edcd4b04 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:51:40 kubernetes-upgrade-497568 crio[3026]: time="2024-06-25 16:51:40.702450243Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719334300702424950,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d74f3b91-74f7-4339-8ee1-03a9edcd4b04 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:51:40 kubernetes-upgrade-497568 crio[3026]: time="2024-06-25 16:51:40.703086922Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=32b85974-e3e1-4430-a01b-1acfda523001 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:51:40 kubernetes-upgrade-497568 crio[3026]: time="2024-06-25 16:51:40.703152974Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=32b85974-e3e1-4430-a01b-1acfda523001 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:51:40 kubernetes-upgrade-497568 crio[3026]: time="2024-06-25 16:51:40.703633480Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7c6e5120381d241a1f614ae8ccf792589c8c65968697d8100953aabbd928506,PodSandboxId:8b5cc3687c3422fd850a67359f1345cffcfe1868be66205efbcf858106278867,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719334297600084758,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-89722,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cf3f382-88e6-4315-a4d7-a53bdde4b5b9,},Annotations:map[string]string{io.kubernetes.container.hash: e5c6b28b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:181827be80e23cf0db12421f32e15c0b9106ff64910bcc89040456bf7a9a6703,PodSandboxId:b27e09ac51cb35a4f4118366dd84e105dcd6d35d21c46fe68c2dce17b421b030,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719334297594521673,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ccplc,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 5245073f-76d9-4bb4-a6b5-b38135f49d01,},Annotations:map[string]string{io.kubernetes.container.hash: 7b50572c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a24cf1296d56e1a7ce480df5c60837a0b7747fc84e064541e0196ddb57156b62,PodSandboxId:e0af35edf91de34be5cf6e4cc8e124c939ba7da6ed772d735002c55d012e0784,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1719334297575055914,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d703bca-2052-461b-b73a-e2cb459196f4,},Annotations:map[string]string{io.kubernetes.container.hash: 36bcb85,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8da2d8e5e05271e417acf33cc3111081755f2e856eaabecca249351be681eb49,PodSandboxId:d5ea9626218c4eff1d102d659b38017eba97af3b68d11cfbd5881f0d6c2ee918,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNIN
G,CreatedAt:1719334294768116475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282933f59338969ac7080f7c297c2d28,},Annotations:map[string]string{io.kubernetes.container.hash: 155cfea6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce113d0dcfda0193ff23c4ba576dff46e889b9aa9d9a8597c6aeee48573ab194,PodSandboxId:30634e1867bd348256f198a0796aa96abb785c111b58365323bab16aaec212ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RU
NNING,CreatedAt:1719334294764945654,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e7e9a8ad137bb1be2500b260890a7e8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75e13dd03e688e003dedb84857fd11b2bfdf538149b585c299f579794f29f13a,PodSandboxId:e0af35edf91de34be5cf6e4cc8e124c939ba7da6ed772d735002c55d012e0784,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,
State:CONTAINER_EXITED,CreatedAt:1719334292476854271,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d703bca-2052-461b-b73a-e2cb459196f4,},Annotations:map[string]string{io.kubernetes.container.hash: 36bcb85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2eec6e3c4e05ac9087bf9e4065268db0c54ab9b0a7e51c7fc85c16c8357182b,PodSandboxId:4c7d08ef3c1725a5a8384e380c66a9a89bd4f3960d467ffc39ccae540eeef419,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,Creat
edAt:1719334291465031540,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76aa076a48867817fc24fc7b64301700,},Annotations:map[string]string{io.kubernetes.container.hash: 8c22f90b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a458b4aa87d1284e611fa7ae805ebf44bd99abe5c1a2a8bda9308b1be83d844f,PodSandboxId:63a4409f57cb70bafbf40785a9e05072fcc97bf995451101d2cac910dfd96d35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719334290466822423,Labe
ls:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fcfpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2a574fa-84c6-41ad-8d0d-8b4e6558a2e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6f9422e5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fdddf32f93cf3f0f081aab0b27d1b6ccd2592b10dd4f59a0fd234b4ac7de910,PodSandboxId:8850614681ae1a1bd5f67535280ff615daa1b2ab41c71b7610acaf73f05f3a49,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719334285243968941,Labels:map[string]string{io.kuber
netes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae702f608c982e45d4f0da8e4e941cd4,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194847f2c9380af6124ed983007f65e58128fb5bdd636d45ec7fb17ac90a82e2,PodSandboxId:b27e09ac51cb35a4f4118366dd84e105dcd6d35d21c46fe68c2dce17b421b030,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719334277166218659,Labels:map[string]string{io.kubernetes.containe
r.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ccplc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5245073f-76d9-4bb4-a6b5-b38135f49d01,},Annotations:map[string]string{io.kubernetes.container.hash: 7b50572c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13384b571d0c9d2a23979f4c01b8a5411372cad68452ac5bea553d6741e463f1,PodSandboxId:8b5cc3687c3422fd850a67359f1345cffcfe1868be66205efbcf858106278867,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719334277125585879,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-89722,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cf3f382-88e6-4315-a4d7-a53bdde4b5b9,},Annotations:map[string]string{io.kubernetes.container.hash: e5c6b28b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa8a30e19374b286581df6bb472b9553e70cad7b3fbbb57077e7affe711267fb,PodSandboxId:892e4da39c1d2c4e252e2f72d7a8c1e5ba2c19f058483b7bb6591cce655a45
de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1719334273872676625,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fcfpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2a574fa-84c6-41ad-8d0d-8b4e6558a2e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6f9422e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dd25e227e5d807ed9990002ed7037672dd0da1ef1d88250c968a8766d42dcaa,PodSandboxId:cb5e5427fe5d6c25ef9f4f7f7a83e2f8480d04858542a8fcfaae1e68057a2eec,Metadata:&ContainerMetadata{Nam
e:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1719334273921118039,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282933f59338969ac7080f7c297c2d28,},Annotations:map[string]string{io.kubernetes.container.hash: 155cfea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8392981aa81906b01a6643948ea2f13fec3c59ee286b445d5d126eee849058a,PodSandboxId:0b041a0005fe096fee580a0875f81582f4d491af1b5c6220b81dd5662b57414c,Metadata:&ContainerMetadata{Name:kube
-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1719334273819262718,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae702f608c982e45d4f0da8e4e941cd4,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e0348b1f95e3f043d1f89d9695129f935c8c3e864b8d1456d908e765e26e75,PodSandboxId:18837eb823d203fe4e991a8017295562b16399a61981f287b610a5c9fe374e80,Metadata:&ContainerMetadata{Name:etcd,Attemp
t:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1719334273520041732,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76aa076a48867817fc24fc7b64301700,},Annotations:map[string]string{io.kubernetes.container.hash: 8c22f90b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b62f76de1db9690b2dd006d497ef166ae51021bccf13a0ed7b7e8601574ec86,PodSandboxId:55fa488e1421cdd5036d4164360555fda510feb3946a4f1840fabfba574d9e60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&Imag
eSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1719334273651414986,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e7e9a8ad137bb1be2500b260890a7e8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=32b85974-e3e1-4430-a01b-1acfda523001 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:51:40 kubernetes-upgrade-497568 crio[3026]: time="2024-06-25 16:51:40.748471551Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aa91a232-35de-4576-b9c3-3f5bdf933695 name=/runtime.v1.RuntimeService/Version
	Jun 25 16:51:40 kubernetes-upgrade-497568 crio[3026]: time="2024-06-25 16:51:40.748564819Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aa91a232-35de-4576-b9c3-3f5bdf933695 name=/runtime.v1.RuntimeService/Version
	Jun 25 16:51:40 kubernetes-upgrade-497568 crio[3026]: time="2024-06-25 16:51:40.749934203Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f7a63f16-884e-4aa6-aac7-6a16b7b8c8f3 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:51:40 kubernetes-upgrade-497568 crio[3026]: time="2024-06-25 16:51:40.750402761Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719334300750378290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f7a63f16-884e-4aa6-aac7-6a16b7b8c8f3 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:51:40 kubernetes-upgrade-497568 crio[3026]: time="2024-06-25 16:51:40.750898996Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ece3fa0b-3961-4173-aba9-7e010026ca61 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:51:40 kubernetes-upgrade-497568 crio[3026]: time="2024-06-25 16:51:40.750972329Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ece3fa0b-3961-4173-aba9-7e010026ca61 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:51:40 kubernetes-upgrade-497568 crio[3026]: time="2024-06-25 16:51:40.751281831Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7c6e5120381d241a1f614ae8ccf792589c8c65968697d8100953aabbd928506,PodSandboxId:8b5cc3687c3422fd850a67359f1345cffcfe1868be66205efbcf858106278867,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719334297600084758,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-89722,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cf3f382-88e6-4315-a4d7-a53bdde4b5b9,},Annotations:map[string]string{io.kubernetes.container.hash: e5c6b28b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:181827be80e23cf0db12421f32e15c0b9106ff64910bcc89040456bf7a9a6703,PodSandboxId:b27e09ac51cb35a4f4118366dd84e105dcd6d35d21c46fe68c2dce17b421b030,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719334297594521673,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ccplc,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 5245073f-76d9-4bb4-a6b5-b38135f49d01,},Annotations:map[string]string{io.kubernetes.container.hash: 7b50572c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a24cf1296d56e1a7ce480df5c60837a0b7747fc84e064541e0196ddb57156b62,PodSandboxId:e0af35edf91de34be5cf6e4cc8e124c939ba7da6ed772d735002c55d012e0784,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1719334297575055914,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d703bca-2052-461b-b73a-e2cb459196f4,},Annotations:map[string]string{io.kubernetes.container.hash: 36bcb85,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8da2d8e5e05271e417acf33cc3111081755f2e856eaabecca249351be681eb49,PodSandboxId:d5ea9626218c4eff1d102d659b38017eba97af3b68d11cfbd5881f0d6c2ee918,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNIN
G,CreatedAt:1719334294768116475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282933f59338969ac7080f7c297c2d28,},Annotations:map[string]string{io.kubernetes.container.hash: 155cfea6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce113d0dcfda0193ff23c4ba576dff46e889b9aa9d9a8597c6aeee48573ab194,PodSandboxId:30634e1867bd348256f198a0796aa96abb785c111b58365323bab16aaec212ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RU
NNING,CreatedAt:1719334294764945654,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e7e9a8ad137bb1be2500b260890a7e8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75e13dd03e688e003dedb84857fd11b2bfdf538149b585c299f579794f29f13a,PodSandboxId:e0af35edf91de34be5cf6e4cc8e124c939ba7da6ed772d735002c55d012e0784,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,
State:CONTAINER_EXITED,CreatedAt:1719334292476854271,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d703bca-2052-461b-b73a-e2cb459196f4,},Annotations:map[string]string{io.kubernetes.container.hash: 36bcb85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2eec6e3c4e05ac9087bf9e4065268db0c54ab9b0a7e51c7fc85c16c8357182b,PodSandboxId:4c7d08ef3c1725a5a8384e380c66a9a89bd4f3960d467ffc39ccae540eeef419,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,Creat
edAt:1719334291465031540,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76aa076a48867817fc24fc7b64301700,},Annotations:map[string]string{io.kubernetes.container.hash: 8c22f90b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a458b4aa87d1284e611fa7ae805ebf44bd99abe5c1a2a8bda9308b1be83d844f,PodSandboxId:63a4409f57cb70bafbf40785a9e05072fcc97bf995451101d2cac910dfd96d35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719334290466822423,Labe
ls:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fcfpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2a574fa-84c6-41ad-8d0d-8b4e6558a2e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6f9422e5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fdddf32f93cf3f0f081aab0b27d1b6ccd2592b10dd4f59a0fd234b4ac7de910,PodSandboxId:8850614681ae1a1bd5f67535280ff615daa1b2ab41c71b7610acaf73f05f3a49,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719334285243968941,Labels:map[string]string{io.kuber
netes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae702f608c982e45d4f0da8e4e941cd4,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194847f2c9380af6124ed983007f65e58128fb5bdd636d45ec7fb17ac90a82e2,PodSandboxId:b27e09ac51cb35a4f4118366dd84e105dcd6d35d21c46fe68c2dce17b421b030,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719334277166218659,Labels:map[string]string{io.kubernetes.containe
r.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ccplc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5245073f-76d9-4bb4-a6b5-b38135f49d01,},Annotations:map[string]string{io.kubernetes.container.hash: 7b50572c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13384b571d0c9d2a23979f4c01b8a5411372cad68452ac5bea553d6741e463f1,PodSandboxId:8b5cc3687c3422fd850a67359f1345cffcfe1868be66205efbcf858106278867,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719334277125585879,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-89722,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cf3f382-88e6-4315-a4d7-a53bdde4b5b9,},Annotations:map[string]string{io.kubernetes.container.hash: e5c6b28b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa8a30e19374b286581df6bb472b9553e70cad7b3fbbb57077e7affe711267fb,PodSandboxId:892e4da39c1d2c4e252e2f72d7a8c1e5ba2c19f058483b7bb6591cce655a45
de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1719334273872676625,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fcfpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2a574fa-84c6-41ad-8d0d-8b4e6558a2e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6f9422e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dd25e227e5d807ed9990002ed7037672dd0da1ef1d88250c968a8766d42dcaa,PodSandboxId:cb5e5427fe5d6c25ef9f4f7f7a83e2f8480d04858542a8fcfaae1e68057a2eec,Metadata:&ContainerMetadata{Nam
e:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1719334273921118039,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282933f59338969ac7080f7c297c2d28,},Annotations:map[string]string{io.kubernetes.container.hash: 155cfea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8392981aa81906b01a6643948ea2f13fec3c59ee286b445d5d126eee849058a,PodSandboxId:0b041a0005fe096fee580a0875f81582f4d491af1b5c6220b81dd5662b57414c,Metadata:&ContainerMetadata{Name:kube
-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1719334273819262718,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae702f608c982e45d4f0da8e4e941cd4,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e0348b1f95e3f043d1f89d9695129f935c8c3e864b8d1456d908e765e26e75,PodSandboxId:18837eb823d203fe4e991a8017295562b16399a61981f287b610a5c9fe374e80,Metadata:&ContainerMetadata{Name:etcd,Attemp
t:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1719334273520041732,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76aa076a48867817fc24fc7b64301700,},Annotations:map[string]string{io.kubernetes.container.hash: 8c22f90b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b62f76de1db9690b2dd006d497ef166ae51021bccf13a0ed7b7e8601574ec86,PodSandboxId:55fa488e1421cdd5036d4164360555fda510feb3946a4f1840fabfba574d9e60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&Imag
eSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1719334273651414986,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e7e9a8ad137bb1be2500b260890a7e8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ece3fa0b-3961-4173-aba9-7e010026ca61 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:51:40 kubernetes-upgrade-497568 crio[3026]: time="2024-06-25 16:51:40.796265541Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=21d10a49-e0f5-48c5-9b35-13606d150a3a name=/runtime.v1.RuntimeService/Version
	Jun 25 16:51:40 kubernetes-upgrade-497568 crio[3026]: time="2024-06-25 16:51:40.796395091Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=21d10a49-e0f5-48c5-9b35-13606d150a3a name=/runtime.v1.RuntimeService/Version
	Jun 25 16:51:40 kubernetes-upgrade-497568 crio[3026]: time="2024-06-25 16:51:40.798440445Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d3bd7075-32db-4f4f-ba13-ce03ca0fa5f3 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:51:40 kubernetes-upgrade-497568 crio[3026]: time="2024-06-25 16:51:40.798788459Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719334300798767169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d3bd7075-32db-4f4f-ba13-ce03ca0fa5f3 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:51:40 kubernetes-upgrade-497568 crio[3026]: time="2024-06-25 16:51:40.799443720Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f23035b-1b2b-4d23-bb2a-eb6788eee75e name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:51:40 kubernetes-upgrade-497568 crio[3026]: time="2024-06-25 16:51:40.799518745Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f23035b-1b2b-4d23-bb2a-eb6788eee75e name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:51:40 kubernetes-upgrade-497568 crio[3026]: time="2024-06-25 16:51:40.799817136Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7c6e5120381d241a1f614ae8ccf792589c8c65968697d8100953aabbd928506,PodSandboxId:8b5cc3687c3422fd850a67359f1345cffcfe1868be66205efbcf858106278867,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719334297600084758,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-89722,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cf3f382-88e6-4315-a4d7-a53bdde4b5b9,},Annotations:map[string]string{io.kubernetes.container.hash: e5c6b28b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:181827be80e23cf0db12421f32e15c0b9106ff64910bcc89040456bf7a9a6703,PodSandboxId:b27e09ac51cb35a4f4118366dd84e105dcd6d35d21c46fe68c2dce17b421b030,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719334297594521673,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ccplc,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 5245073f-76d9-4bb4-a6b5-b38135f49d01,},Annotations:map[string]string{io.kubernetes.container.hash: 7b50572c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a24cf1296d56e1a7ce480df5c60837a0b7747fc84e064541e0196ddb57156b62,PodSandboxId:e0af35edf91de34be5cf6e4cc8e124c939ba7da6ed772d735002c55d012e0784,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1719334297575055914,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d703bca-2052-461b-b73a-e2cb459196f4,},Annotations:map[string]string{io.kubernetes.container.hash: 36bcb85,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8da2d8e5e05271e417acf33cc3111081755f2e856eaabecca249351be681eb49,PodSandboxId:d5ea9626218c4eff1d102d659b38017eba97af3b68d11cfbd5881f0d6c2ee918,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNIN
G,CreatedAt:1719334294768116475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282933f59338969ac7080f7c297c2d28,},Annotations:map[string]string{io.kubernetes.container.hash: 155cfea6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce113d0dcfda0193ff23c4ba576dff46e889b9aa9d9a8597c6aeee48573ab194,PodSandboxId:30634e1867bd348256f198a0796aa96abb785c111b58365323bab16aaec212ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RU
NNING,CreatedAt:1719334294764945654,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e7e9a8ad137bb1be2500b260890a7e8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75e13dd03e688e003dedb84857fd11b2bfdf538149b585c299f579794f29f13a,PodSandboxId:e0af35edf91de34be5cf6e4cc8e124c939ba7da6ed772d735002c55d012e0784,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,
State:CONTAINER_EXITED,CreatedAt:1719334292476854271,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d703bca-2052-461b-b73a-e2cb459196f4,},Annotations:map[string]string{io.kubernetes.container.hash: 36bcb85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2eec6e3c4e05ac9087bf9e4065268db0c54ab9b0a7e51c7fc85c16c8357182b,PodSandboxId:4c7d08ef3c1725a5a8384e380c66a9a89bd4f3960d467ffc39ccae540eeef419,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,Creat
edAt:1719334291465031540,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76aa076a48867817fc24fc7b64301700,},Annotations:map[string]string{io.kubernetes.container.hash: 8c22f90b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a458b4aa87d1284e611fa7ae805ebf44bd99abe5c1a2a8bda9308b1be83d844f,PodSandboxId:63a4409f57cb70bafbf40785a9e05072fcc97bf995451101d2cac910dfd96d35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719334290466822423,Labe
ls:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fcfpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2a574fa-84c6-41ad-8d0d-8b4e6558a2e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6f9422e5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fdddf32f93cf3f0f081aab0b27d1b6ccd2592b10dd4f59a0fd234b4ac7de910,PodSandboxId:8850614681ae1a1bd5f67535280ff615daa1b2ab41c71b7610acaf73f05f3a49,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719334285243968941,Labels:map[string]string{io.kuber
netes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae702f608c982e45d4f0da8e4e941cd4,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194847f2c9380af6124ed983007f65e58128fb5bdd636d45ec7fb17ac90a82e2,PodSandboxId:b27e09ac51cb35a4f4118366dd84e105dcd6d35d21c46fe68c2dce17b421b030,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719334277166218659,Labels:map[string]string{io.kubernetes.containe
r.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ccplc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5245073f-76d9-4bb4-a6b5-b38135f49d01,},Annotations:map[string]string{io.kubernetes.container.hash: 7b50572c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13384b571d0c9d2a23979f4c01b8a5411372cad68452ac5bea553d6741e463f1,PodSandboxId:8b5cc3687c3422fd850a67359f1345cffcfe1868be66205efbcf858106278867,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719334277125585879,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-89722,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cf3f382-88e6-4315-a4d7-a53bdde4b5b9,},Annotations:map[string]string{io.kubernetes.container.hash: e5c6b28b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa8a30e19374b286581df6bb472b9553e70cad7b3fbbb57077e7affe711267fb,PodSandboxId:892e4da39c1d2c4e252e2f72d7a8c1e5ba2c19f058483b7bb6591cce655a45
de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1719334273872676625,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fcfpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2a574fa-84c6-41ad-8d0d-8b4e6558a2e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6f9422e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dd25e227e5d807ed9990002ed7037672dd0da1ef1d88250c968a8766d42dcaa,PodSandboxId:cb5e5427fe5d6c25ef9f4f7f7a83e2f8480d04858542a8fcfaae1e68057a2eec,Metadata:&ContainerMetadata{Nam
e:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1719334273921118039,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282933f59338969ac7080f7c297c2d28,},Annotations:map[string]string{io.kubernetes.container.hash: 155cfea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8392981aa81906b01a6643948ea2f13fec3c59ee286b445d5d126eee849058a,PodSandboxId:0b041a0005fe096fee580a0875f81582f4d491af1b5c6220b81dd5662b57414c,Metadata:&ContainerMetadata{Name:kube
-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1719334273819262718,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae702f608c982e45d4f0da8e4e941cd4,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e0348b1f95e3f043d1f89d9695129f935c8c3e864b8d1456d908e765e26e75,PodSandboxId:18837eb823d203fe4e991a8017295562b16399a61981f287b610a5c9fe374e80,Metadata:&ContainerMetadata{Name:etcd,Attemp
t:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1719334273520041732,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76aa076a48867817fc24fc7b64301700,},Annotations:map[string]string{io.kubernetes.container.hash: 8c22f90b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b62f76de1db9690b2dd006d497ef166ae51021bccf13a0ed7b7e8601574ec86,PodSandboxId:55fa488e1421cdd5036d4164360555fda510feb3946a4f1840fabfba574d9e60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&Imag
eSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1719334273651414986,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-497568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e7e9a8ad137bb1be2500b260890a7e8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3f23035b-1b2b-4d23-bb2a-eb6788eee75e name=/runtime.v1.RuntimeService/ListContainers
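The repeated ListContainers responses above are CRI-O answering the kubelet's periodic polling over the CRI gRPC API; with no filter set, CRI-O logs "No filters were applied, returning full container list" and returns every container, including the exited attempt-1 instances from before the restart. For anyone reproducing this inspection outside the test harness, here is a minimal Go sketch (assumptions: CRI-O's default socket at /var/run/crio/crio.sock and the k8s.io/cri-api v1 bindings; this file is illustrative and is not part of minikube or this test):

	// list_containers.go: illustrative sketch of the ListContainers call seen in the crio debug log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Dial the CRI-O runtime socket (default path; adjust for your host).
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)

		// An empty filter corresponds to "returning full container list" in the log.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// Print the same fields the container-status table below summarizes.
			fmt.Printf("%s  %-25s attempt=%d state=%s\n",
				c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}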
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e7c6e5120381d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   8b5cc3687c342       coredns-7db6d8ff4d-89722
	181827be80e23       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   b27e09ac51cb3       coredns-7db6d8ff4d-ccplc
	a24cf1296d56e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       3                   e0af35edf91de       storage-provisioner
	8da2d8e5e0527       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   6 seconds ago       Running             kube-apiserver            2                   d5ea9626218c4       kube-apiserver-kubernetes-upgrade-497568
	ce113d0dcfda0       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   6 seconds ago       Running             kube-controller-manager   2                   30634e1867bd3       kube-controller-manager-kubernetes-upgrade-497568
	75e13dd03e688       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   8 seconds ago       Exited              storage-provisioner       2                   e0af35edf91de       storage-provisioner
	f2eec6e3c4e05       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 seconds ago       Running             etcd                      2                   4c7d08ef3c172       etcd-kubernetes-upgrade-497568
	a458b4aa87d12       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   10 seconds ago      Running             kube-proxy                2                   63a4409f57cb7       kube-proxy-fcfpx
	6fdddf32f93cf       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   15 seconds ago      Running             kube-scheduler            2                   8850614681ae1       kube-scheduler-kubernetes-upgrade-497568
	194847f2c9380       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   23 seconds ago      Exited              coredns                   1                   b27e09ac51cb3       coredns-7db6d8ff4d-ccplc
	13384b571d0c9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   23 seconds ago      Exited              coredns                   1                   8b5cc3687c342       coredns-7db6d8ff4d-89722
	6dd25e227e5d8       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   26 seconds ago      Exited              kube-apiserver            1                   cb5e5427fe5d6       kube-apiserver-kubernetes-upgrade-497568
	aa8a30e19374b       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   27 seconds ago      Exited              kube-proxy                1                   892e4da39c1d2       kube-proxy-fcfpx
	f8392981aa819       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   27 seconds ago      Exited              kube-scheduler            1                   0b041a0005fe0       kube-scheduler-kubernetes-upgrade-497568
	1b62f76de1db9       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   27 seconds ago      Exited              kube-controller-manager   1                   55fa488e1421c       kube-controller-manager-kubernetes-upgrade-497568
	80e0348b1f95e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   27 seconds ago      Exited              etcd                      1                   18837eb823d20       etcd-kubernetes-upgrade-497568
	
	
	==> coredns [13384b571d0c9d2a23979f4c01b8a5411372cad68452ac5bea553d6741e463f1] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [181827be80e23cf0db12421f32e15c0b9106ff64910bcc89040456bf7a9a6703] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [194847f2c9380af6124ed983007f65e58128fb5bdd636d45ec7fb17ac90a82e2] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e7c6e5120381d241a1f614ae8ccf792589c8c65968697d8100953aabbd928506] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-497568
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-497568
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 25 Jun 2024 16:50:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-497568
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 25 Jun 2024 16:51:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 25 Jun 2024 16:51:36 +0000   Tue, 25 Jun 2024 16:50:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 25 Jun 2024 16:51:36 +0000   Tue, 25 Jun 2024 16:50:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 25 Jun 2024 16:51:36 +0000   Tue, 25 Jun 2024 16:50:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 25 Jun 2024 16:51:36 +0000   Tue, 25 Jun 2024 16:50:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.64
	  Hostname:    kubernetes-upgrade-497568
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 533b7cdf4a744d9f99c51d63dfd019c1
	  System UUID:                533b7cdf-4a74-4d9f-99c5-1d63dfd019c1
	  Boot ID:                    e58fe84d-11f4-48e2-ae4e-f94ac8fbcc3c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-89722                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     66s
	  kube-system                 coredns-7db6d8ff4d-ccplc                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     67s
	  kube-system                 etcd-kubernetes-upgrade-497568                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         74s
	  kube-system                 kube-apiserver-kubernetes-upgrade-497568             250m (12%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-497568    200m (10%)    0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-proxy-fcfpx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-scheduler-kubernetes-upgrade-497568             100m (5%)     0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 65s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  86s (x8 over 87s)  kubelet          Node kubernetes-upgrade-497568 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    86s (x8 over 87s)  kubelet          Node kubernetes-upgrade-497568 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     86s (x7 over 87s)  kubelet          Node kubernetes-upgrade-497568 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  86s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           67s                node-controller  Node kubernetes-upgrade-497568 event: Registered Node kubernetes-upgrade-497568 in Controller
	  Normal  Starting                 7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s (x8 over 7s)    kubelet          Node kubernetes-upgrade-497568 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s (x8 over 7s)    kubelet          Node kubernetes-upgrade-497568 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s (x7 over 7s)    kubelet          Node kubernetes-upgrade-497568 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7s                 kubelet          Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun25 16:50] systemd-fstab-generator[576]: Ignoring "noauto" option for root device
	[  +0.057376] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065118] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.213588] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.127435] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.305347] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +4.396003] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +0.068762] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.971184] systemd-fstab-generator[869]: Ignoring "noauto" option for root device
	[ +13.330870] kauditd_printk_skb: 97 callbacks suppressed
	[  +0.132747] systemd-fstab-generator[1265]: Ignoring "noauto" option for root device
	[  +7.533635] kauditd_printk_skb: 15 callbacks suppressed
	[Jun25 16:51] systemd-fstab-generator[2208]: Ignoring "noauto" option for root device
	[  +0.091280] kauditd_printk_skb: 76 callbacks suppressed
	[  +0.097539] systemd-fstab-generator[2220]: Ignoring "noauto" option for root device
	[  +0.319094] systemd-fstab-generator[2308]: Ignoring "noauto" option for root device
	[  +0.358720] systemd-fstab-generator[2427]: Ignoring "noauto" option for root device
	[  +1.175174] systemd-fstab-generator[2872]: Ignoring "noauto" option for root device
	[  +1.432084] systemd-fstab-generator[3283]: Ignoring "noauto" option for root device
	[  +9.255873] kauditd_printk_skb: 300 callbacks suppressed
	[  +8.743728] systemd-fstab-generator[4174]: Ignoring "noauto" option for root device
	[  +3.625674] kauditd_printk_skb: 50 callbacks suppressed
	[  +1.204697] systemd-fstab-generator[4605]: Ignoring "noauto" option for root device
	
	
	==> etcd [80e0348b1f95e3f043d1f89d9695129f935c8c3e864b8d1456d908e765e26e75] <==
	{"level":"warn","ts":"2024-06-25T16:51:14.272097Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-06-25T16:51:14.272108Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.64:2380"]}
	{"level":"info","ts":"2024-06-25T16:51:14.272134Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-25T16:51:14.279859Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.64:2379"]}
	{"level":"info","ts":"2024-06-25T16:51:14.280007Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"kubernetes-upgrade-497568","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.64:2380"],"listen-peer-urls":["https://192.168.39.64:2380"],"advertise-client-urls":["https://192.168.39.64:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.64:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","i
nitial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-06-25T16:51:14.334443Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"54.255743ms"}
	{"level":"info","ts":"2024-06-25T16:51:14.380693Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-06-25T16:51:14.39316Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"c3619ef1effce12d","local-member-id":"7dcc3547d111063c","commit-index":409}
	{"level":"info","ts":"2024-06-25T16:51:14.401888Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c switched to configuration voters=()"}
	{"level":"info","ts":"2024-06-25T16:51:14.410684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c became follower at term 2"}
	{"level":"info","ts":"2024-06-25T16:51:14.410825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 7dcc3547d111063c [peers: [], term: 2, commit: 409, applied: 0, lastindex: 409, lastterm: 2]"}
	{"level":"warn","ts":"2024-06-25T16:51:14.433923Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-06-25T16:51:14.520883Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":393}
	{"level":"info","ts":"2024-06-25T16:51:14.588487Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-06-25T16:51:14.623942Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"7dcc3547d111063c","timeout":"7s"}
	{"level":"info","ts":"2024-06-25T16:51:14.640701Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"7dcc3547d111063c"}
	{"level":"info","ts":"2024-06-25T16:51:14.647472Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"7dcc3547d111063c","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-06-25T16:51:14.65277Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-06-25T16:51:14.653092Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-25T16:51:14.653237Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-25T16:51:14.653247Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-25T16:51:14.710665Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c switched to configuration voters=(9064678732556469820)"}
	{"level":"info","ts":"2024-06-25T16:51:14.710778Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c3619ef1effce12d","local-member-id":"7dcc3547d111063c","added-peer-id":"7dcc3547d111063c","added-peer-peer-urls":["https://192.168.39.64:2380"]}
	{"level":"info","ts":"2024-06-25T16:51:14.710935Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c3619ef1effce12d","local-member-id":"7dcc3547d111063c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-25T16:51:14.710976Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	
	
	==> etcd [f2eec6e3c4e05ac9087bf9e4065268db0c54ab9b0a7e51c7fc85c16c8357182b] <==
	{"level":"info","ts":"2024-06-25T16:51:31.601033Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-25T16:51:31.601079Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-25T16:51:31.60138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c switched to configuration voters=(9064678732556469820)"}
	{"level":"info","ts":"2024-06-25T16:51:31.601513Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c3619ef1effce12d","local-member-id":"7dcc3547d111063c","added-peer-id":"7dcc3547d111063c","added-peer-peer-urls":["https://192.168.39.64:2380"]}
	{"level":"info","ts":"2024-06-25T16:51:31.601664Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c3619ef1effce12d","local-member-id":"7dcc3547d111063c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-25T16:51:31.601784Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-25T16:51:31.604Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-25T16:51:31.604209Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7dcc3547d111063c","initial-advertise-peer-urls":["https://192.168.39.64:2380"],"listen-peer-urls":["https://192.168.39.64:2380"],"advertise-client-urls":["https://192.168.39.64:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.64:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-25T16:51:31.604266Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-25T16:51:31.604434Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.64:2380"}
	{"level":"info","ts":"2024-06-25T16:51:31.604476Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.64:2380"}
	{"level":"info","ts":"2024-06-25T16:51:33.189181Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-25T16:51:33.189218Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-25T16:51:33.189258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c received MsgPreVoteResp from 7dcc3547d111063c at term 2"}
	{"level":"info","ts":"2024-06-25T16:51:33.189273Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c became candidate at term 3"}
	{"level":"info","ts":"2024-06-25T16:51:33.189278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c received MsgVoteResp from 7dcc3547d111063c at term 3"}
	{"level":"info","ts":"2024-06-25T16:51:33.189286Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c became leader at term 3"}
	{"level":"info","ts":"2024-06-25T16:51:33.189358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7dcc3547d111063c elected leader 7dcc3547d111063c at term 3"}
	{"level":"info","ts":"2024-06-25T16:51:33.192635Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7dcc3547d111063c","local-member-attributes":"{Name:kubernetes-upgrade-497568 ClientURLs:[https://192.168.39.64:2379]}","request-path":"/0/members/7dcc3547d111063c/attributes","cluster-id":"c3619ef1effce12d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-25T16:51:33.192799Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-25T16:51:33.192855Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-25T16:51:33.193464Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-25T16:51:33.193507Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-25T16:51:33.197197Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-25T16:51:33.200968Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.64:2379"}
	
	
	==> kernel <==
	 16:51:41 up 1 min,  0 users,  load average: 2.07, 0.63, 0.22
	Linux kubernetes-upgrade-497568 5.10.207 #1 SMP Mon Jun 24 21:03:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6dd25e227e5d807ed9990002ed7037672dd0da1ef1d88250c968a8766d42dcaa] <==
	
	
	==> kube-apiserver [8da2d8e5e05271e417acf33cc3111081755f2e856eaabecca249351be681eb49] <==
	I0625 16:51:36.753392       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0625 16:51:36.850634       1 shared_informer.go:320] Caches are synced for configmaps
	I0625 16:51:36.853387       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0625 16:51:36.854269       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0625 16:51:36.854546       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0625 16:51:36.856755       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0625 16:51:36.860739       1 aggregator.go:165] initial CRD sync complete...
	I0625 16:51:36.860770       1 autoregister_controller.go:141] Starting autoregister controller
	I0625 16:51:36.860776       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0625 16:51:36.860781       1 cache.go:39] Caches are synced for autoregister controller
	I0625 16:51:36.886564       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0625 16:51:36.889238       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0625 16:51:36.889915       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0625 16:51:36.890017       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0625 16:51:36.892166       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0625 16:51:36.892209       1 policy_source.go:224] refreshing policies
	I0625 16:51:36.899048       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0625 16:51:36.925092       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0625 16:51:37.776544       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0625 16:51:37.781580       1 controller.go:615] quota admission added evaluator for: endpoints
	I0625 16:51:38.564662       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0625 16:51:38.578150       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0625 16:51:38.621699       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0625 16:51:38.740248       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0625 16:51:38.748650       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [1b62f76de1db9690b2dd006d497ef166ae51021bccf13a0ed7b7e8601574ec86] <==
	
	
	==> kube-controller-manager [ce113d0dcfda0193ff23c4ba576dff46e889b9aa9d9a8597c6aeee48573ab194] <==
	I0625 16:51:38.859158       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0625 16:51:38.859594       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0625 16:51:38.859607       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0625 16:51:38.870394       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0625 16:51:38.870820       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0625 16:51:38.870832       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0625 16:51:38.874821       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0625 16:51:38.874870       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0625 16:51:38.874889       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0625 16:51:38.876785       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0625 16:51:38.876800       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0625 16:51:38.876814       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0625 16:51:38.877767       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0625 16:51:38.877777       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0625 16:51:38.877802       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0625 16:51:38.878647       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0625 16:51:38.878751       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0625 16:51:38.878984       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0625 16:51:38.878776       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0625 16:51:38.883226       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0625 16:51:38.883421       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0625 16:51:38.883675       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0625 16:51:38.887528       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0625 16:51:38.888106       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0625 16:51:38.949684       1 shared_informer.go:320] Caches are synced for tokens
	
	
	==> kube-proxy [a458b4aa87d1284e611fa7ae805ebf44bd99abe5c1a2a8bda9308b1be83d844f] <==
	I0625 16:51:30.576482       1 server_linux.go:69] "Using iptables proxy"
	E0625 16:51:30.581885       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-497568\": dial tcp 192.168.39.64:8443: connect: connection refused"
	E0625 16:51:31.635687       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-497568\": dial tcp 192.168.39.64:8443: connect: connection refused"
	E0625 16:51:33.716801       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-497568\": dial tcp 192.168.39.64:8443: connect: connection refused"
	I0625 16:51:37.943939       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.64"]
	I0625 16:51:37.991485       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0625 16:51:37.991556       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0625 16:51:37.991574       1 server_linux.go:165] "Using iptables Proxier"
	I0625 16:51:37.996502       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0625 16:51:37.996826       1 server.go:872] "Version info" version="v1.30.2"
	I0625 16:51:37.997190       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0625 16:51:37.999167       1 config.go:192] "Starting service config controller"
	I0625 16:51:38.000159       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0625 16:51:37.999542       1 config.go:101] "Starting endpoint slice config controller"
	I0625 16:51:38.000373       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0625 16:51:37.999954       1 config.go:319] "Starting node config controller"
	I0625 16:51:38.000492       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0625 16:51:38.100717       1 shared_informer.go:320] Caches are synced for node config
	I0625 16:51:38.100844       1 shared_informer.go:320] Caches are synced for service config
	I0625 16:51:38.100855       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [aa8a30e19374b286581df6bb472b9553e70cad7b3fbbb57077e7affe711267fb] <==
	
	
	==> kube-scheduler [6fdddf32f93cf3f0f081aab0b27d1b6ccd2592b10dd4f59a0fd234b4ac7de910] <==
	W0625 16:51:34.263440       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.64:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.64:8443: connect: connection refused
	E0625 16:51:34.263476       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.64:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.64:8443: connect: connection refused
	W0625 16:51:34.470199       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.64:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.64:8443: connect: connection refused
	E0625 16:51:34.470235       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.64:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.64:8443: connect: connection refused
	W0625 16:51:34.517955       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.64:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.64:8443: connect: connection refused
	E0625 16:51:34.517991       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.64:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.64:8443: connect: connection refused
	W0625 16:51:34.568139       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.64:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.64:8443: connect: connection refused
	E0625 16:51:34.568176       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.64:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.64:8443: connect: connection refused
	W0625 16:51:34.588175       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.64:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.64:8443: connect: connection refused
	E0625 16:51:34.588255       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.64:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.64:8443: connect: connection refused
	W0625 16:51:34.676734       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.64:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.64:8443: connect: connection refused
	E0625 16:51:34.676769       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.64:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.64:8443: connect: connection refused
	W0625 16:51:34.851783       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.64:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.64:8443: connect: connection refused
	E0625 16:51:34.851890       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.64:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.64:8443: connect: connection refused
	W0625 16:51:34.856349       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.64:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.64:8443: connect: connection refused
	E0625 16:51:34.858388       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.64:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.64:8443: connect: connection refused
	W0625 16:51:34.933841       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.64:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.64:8443: connect: connection refused
	E0625 16:51:34.933995       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.64:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.64:8443: connect: connection refused
	W0625 16:51:36.835656       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0625 16:51:36.835705       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0625 16:51:36.835776       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0625 16:51:36.835785       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0625 16:51:36.836420       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0625 16:51:36.836977       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0625 16:51:36.860363       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f8392981aa81906b01a6643948ea2f13fec3c59ee286b445d5d126eee849058a] <==
	
	
	==> kubelet <==
	Jun 25 16:51:34 kubernetes-upgrade-497568 kubelet[4181]: I0625 16:51:34.747893    4181 scope.go:117] "RemoveContainer" containerID="1b62f76de1db9690b2dd006d497ef166ae51021bccf13a0ed7b7e8601574ec86"
	Jun 25 16:51:34 kubernetes-upgrade-497568 kubelet[4181]: E0625 16:51:34.878126    4181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-497568?timeout=10s\": dial tcp 192.168.39.64:8443: connect: connection refused" interval="800ms"
	Jun 25 16:51:34 kubernetes-upgrade-497568 kubelet[4181]: I0625 16:51:34.976167    4181 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-497568"
	Jun 25 16:51:34 kubernetes-upgrade-497568 kubelet[4181]: E0625 16:51:34.976984    4181 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.64:8443: connect: connection refused" node="kubernetes-upgrade-497568"
	Jun 25 16:51:35 kubernetes-upgrade-497568 kubelet[4181]: I0625 16:51:35.779383    4181 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-497568"
	Jun 25 16:51:36 kubernetes-upgrade-497568 kubelet[4181]: I0625 16:51:36.949518    4181 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-497568"
	Jun 25 16:51:36 kubernetes-upgrade-497568 kubelet[4181]: I0625 16:51:36.949619    4181 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-497568"
	Jun 25 16:51:36 kubernetes-upgrade-497568 kubelet[4181]: I0625 16:51:36.951995    4181 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 25 16:51:36 kubernetes-upgrade-497568 kubelet[4181]: I0625 16:51:36.953992    4181 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 25 16:51:37 kubernetes-upgrade-497568 kubelet[4181]: E0625 16:51:37.176667    4181 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-kubernetes-upgrade-497568\" already exists" pod="kube-system/kube-scheduler-kubernetes-upgrade-497568"
	Jun 25 16:51:37 kubernetes-upgrade-497568 kubelet[4181]: I0625 16:51:37.243198    4181 apiserver.go:52] "Watching apiserver"
	Jun 25 16:51:37 kubernetes-upgrade-497568 kubelet[4181]: I0625 16:51:37.253253    4181 topology_manager.go:215] "Topology Admit Handler" podUID="7d703bca-2052-461b-b73a-e2cb459196f4" podNamespace="kube-system" podName="storage-provisioner"
	Jun 25 16:51:37 kubernetes-upgrade-497568 kubelet[4181]: I0625 16:51:37.254284    4181 topology_manager.go:215] "Topology Admit Handler" podUID="d2a574fa-84c6-41ad-8d0d-8b4e6558a2e0" podNamespace="kube-system" podName="kube-proxy-fcfpx"
	Jun 25 16:51:37 kubernetes-upgrade-497568 kubelet[4181]: I0625 16:51:37.254599    4181 topology_manager.go:215] "Topology Admit Handler" podUID="5245073f-76d9-4bb4-a6b5-b38135f49d01" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ccplc"
	Jun 25 16:51:37 kubernetes-upgrade-497568 kubelet[4181]: I0625 16:51:37.254726    4181 topology_manager.go:215] "Topology Admit Handler" podUID="5cf3f382-88e6-4315-a4d7-a53bdde4b5b9" podNamespace="kube-system" podName="coredns-7db6d8ff4d-89722"
	Jun 25 16:51:37 kubernetes-upgrade-497568 kubelet[4181]: I0625 16:51:37.275023    4181 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 25 16:51:37 kubernetes-upgrade-497568 kubelet[4181]: I0625 16:51:37.373923    4181 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7d703bca-2052-461b-b73a-e2cb459196f4-tmp\") pod \"storage-provisioner\" (UID: \"7d703bca-2052-461b-b73a-e2cb459196f4\") " pod="kube-system/storage-provisioner"
	Jun 25 16:51:37 kubernetes-upgrade-497568 kubelet[4181]: I0625 16:51:37.374056    4181 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2a574fa-84c6-41ad-8d0d-8b4e6558a2e0-xtables-lock\") pod \"kube-proxy-fcfpx\" (UID: \"d2a574fa-84c6-41ad-8d0d-8b4e6558a2e0\") " pod="kube-system/kube-proxy-fcfpx"
	Jun 25 16:51:37 kubernetes-upgrade-497568 kubelet[4181]: I0625 16:51:37.374141    4181 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2a574fa-84c6-41ad-8d0d-8b4e6558a2e0-lib-modules\") pod \"kube-proxy-fcfpx\" (UID: \"d2a574fa-84c6-41ad-8d0d-8b4e6558a2e0\") " pod="kube-system/kube-proxy-fcfpx"
	Jun 25 16:51:37 kubernetes-upgrade-497568 kubelet[4181]: E0625 16:51:37.497762    4181 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-497568\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-497568"
	Jun 25 16:51:37 kubernetes-upgrade-497568 kubelet[4181]: E0625 16:51:37.498416    4181 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-kubernetes-upgrade-497568\" already exists" pod="kube-system/kube-scheduler-kubernetes-upgrade-497568"
	Jun 25 16:51:37 kubernetes-upgrade-497568 kubelet[4181]: I0625 16:51:37.555666    4181 scope.go:117] "RemoveContainer" containerID="75e13dd03e688e003dedb84857fd11b2bfdf538149b585c299f579794f29f13a"
	Jun 25 16:51:37 kubernetes-upgrade-497568 kubelet[4181]: I0625 16:51:37.555956    4181 scope.go:117] "RemoveContainer" containerID="13384b571d0c9d2a23979f4c01b8a5411372cad68452ac5bea553d6741e463f1"
	Jun 25 16:51:37 kubernetes-upgrade-497568 kubelet[4181]: I0625 16:51:37.556357    4181 scope.go:117] "RemoveContainer" containerID="194847f2c9380af6124ed983007f65e58128fb5bdd636d45ec7fb17ac90a82e2"
	Jun 25 16:51:40 kubernetes-upgrade-497568 kubelet[4181]: I0625 16:51:40.595153    4181 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [75e13dd03e688e003dedb84857fd11b2bfdf538149b585c299f579794f29f13a] <==
	I0625 16:51:32.594389       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0625 16:51:32.596046       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [a24cf1296d56e1a7ce480df5c60837a0b7747fc84e064541e0196ddb57156b62] <==
	I0625 16:51:37.727452       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0625 16:51:37.747131       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0625 16:51:37.747222       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0625 16:51:37.791090       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0625 16:51:37.791800       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-497568_6a912a58-77a1-441b-9f12-3ce93babb9df!
	I0625 16:51:37.791913       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"254f7e56-d5ee-4d6f-bdf9-a2edf15ed595", APIVersion:"v1", ResourceVersion:"447", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-497568_6a912a58-77a1-441b-9f12-3ce93babb9df became leader
	I0625 16:51:37.893416       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-497568_6a912a58-77a1-441b-9f12-3ce93babb9df!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-497568 -n kubernetes-upgrade-497568
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-497568 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-497568" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-497568
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-497568: (1.103234037s)
--- FAIL: TestKubernetesUpgrade (404.13s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (95.24s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-756277 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-756277 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m31.053582677s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-756277] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19128-13846/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19128-13846/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-756277" primary control-plane node in "pause-756277" cluster
	* Updating the running kvm2 "pause-756277" VM ...
	* Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-756277" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0625 16:50:27.451203   66820 out.go:291] Setting OutFile to fd 1 ...
	I0625 16:50:27.451456   66820 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:50:27.451464   66820 out.go:304] Setting ErrFile to fd 2...
	I0625 16:50:27.451468   66820 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:50:27.451615   66820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 16:50:27.452086   66820 out.go:298] Setting JSON to false
	I0625 16:50:27.452932   66820 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9171,"bootTime":1719325056,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0625 16:50:27.452989   66820 start.go:139] virtualization: kvm guest
	I0625 16:50:27.454770   66820 out.go:177] * [pause-756277] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0625 16:50:27.456287   66820 out.go:177]   - MINIKUBE_LOCATION=19128
	I0625 16:50:27.456300   66820 notify.go:220] Checking for updates...
	I0625 16:50:27.458512   66820 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0625 16:50:27.459743   66820 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19128-13846/kubeconfig
	I0625 16:50:27.461016   66820 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 16:50:27.462199   66820 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0625 16:50:27.463342   66820 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0625 16:50:27.464753   66820 config.go:182] Loaded profile config "pause-756277": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:50:27.465137   66820 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19128-13846/.minikube/bin/docker-machine-driver-kvm2
	I0625 16:50:27.465178   66820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:50:27.479774   66820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35467
	I0625 16:50:27.480233   66820 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:50:27.480867   66820 main.go:141] libmachine: Using API Version  1
	I0625 16:50:27.480886   66820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:50:27.481203   66820 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:50:27.481388   66820 main.go:141] libmachine: (pause-756277) Calling .DriverName
	I0625 16:50:27.481628   66820 driver.go:392] Setting default libvirt URI to qemu:///system
	I0625 16:50:27.481897   66820 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19128-13846/.minikube/bin/docker-machine-driver-kvm2
	I0625 16:50:27.481951   66820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:50:27.500899   66820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44859
	I0625 16:50:27.501310   66820 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:50:27.501806   66820 main.go:141] libmachine: Using API Version  1
	I0625 16:50:27.501836   66820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:50:27.502144   66820 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:50:27.502339   66820 main.go:141] libmachine: (pause-756277) Calling .DriverName
	I0625 16:50:27.538625   66820 out.go:177] * Using the kvm2 driver based on existing profile
	I0625 16:50:27.539743   66820 start.go:297] selected driver: kvm2
	I0625 16:50:27.539757   66820 start.go:901] validating driver "kvm2" against &{Name:pause-756277 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.30.2 ClusterName:pause-756277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.163 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-dev
ice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 16:50:27.539872   66820 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0625 16:50:27.540170   66820 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0625 16:50:27.540229   66820 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19128-13846/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0625 16:50:27.555323   66820 install.go:137] /home/jenkins/minikube-integration/19128-13846/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0625 16:50:27.555953   66820 cni.go:84] Creating CNI manager for ""
	I0625 16:50:27.555966   66820 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0625 16:50:27.556031   66820 start.go:340] cluster config:
	{Name:pause-756277 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:pause-756277 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.163 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:
false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 16:50:27.556144   66820 iso.go:125] acquiring lock: {Name:mk76df652d5e768afc73443035d5ecb8b75ed16c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0625 16:50:27.557780   66820 out.go:177] * Starting "pause-756277" primary control-plane node in "pause-756277" cluster
	I0625 16:50:27.558917   66820 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0625 16:50:27.558959   66820 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0625 16:50:27.558973   66820 cache.go:56] Caching tarball of preloaded images
	I0625 16:50:27.559052   66820 preload.go:173] Found /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0625 16:50:27.559067   66820 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0625 16:50:27.559214   66820 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/pause-756277/config.json ...
	I0625 16:50:27.559458   66820 start.go:360] acquireMachinesLock for pause-756277: {Name:mk2a1ebee912b37a2b68bf2f76641f82f8fc2fcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0625 16:50:57.848540   66820 start.go:364] duration metric: took 30.289055098s to acquireMachinesLock for "pause-756277"
	I0625 16:50:57.848589   66820 start.go:96] Skipping create...Using existing machine configuration
	I0625 16:50:57.848597   66820 fix.go:54] fixHost starting: 
	I0625 16:50:57.848983   66820 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19128-13846/.minikube/bin/docker-machine-driver-kvm2
	I0625 16:50:57.849026   66820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:50:57.867053   66820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36275
	I0625 16:50:57.867541   66820 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:50:57.868100   66820 main.go:141] libmachine: Using API Version  1
	I0625 16:50:57.868126   66820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:50:57.868468   66820 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:50:57.868669   66820 main.go:141] libmachine: (pause-756277) Calling .DriverName
	I0625 16:50:57.868846   66820 main.go:141] libmachine: (pause-756277) Calling .GetState
	I0625 16:50:57.870641   66820 fix.go:112] recreateIfNeeded on pause-756277: state=Running err=<nil>
	W0625 16:50:57.870670   66820 fix.go:138] unexpected machine state, will restart: <nil>
	I0625 16:50:57.872074   66820 out.go:177] * Updating the running kvm2 "pause-756277" VM ...
	I0625 16:50:57.873198   66820 machine.go:94] provisionDockerMachine start ...
	I0625 16:50:57.873221   66820 main.go:141] libmachine: (pause-756277) Calling .DriverName
	I0625 16:50:57.873419   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHHostname
	I0625 16:50:57.876223   66820 main.go:141] libmachine: (pause-756277) DBG | domain pause-756277 has defined MAC address 52:54:00:84:e8:5f in network mk-pause-756277
	I0625 16:50:57.876737   66820 main.go:141] libmachine: (pause-756277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:e8:5f", ip: ""} in network mk-pause-756277: {Iface:virbr3 ExpiryTime:2024-06-25 17:49:04 +0000 UTC Type:0 Mac:52:54:00:84:e8:5f Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:pause-756277 Clientid:01:52:54:00:84:e8:5f}
	I0625 16:50:57.876763   66820 main.go:141] libmachine: (pause-756277) DBG | domain pause-756277 has defined IP address 192.168.50.163 and MAC address 52:54:00:84:e8:5f in network mk-pause-756277
	I0625 16:50:57.876920   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHPort
	I0625 16:50:57.877075   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHKeyPath
	I0625 16:50:57.877216   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHKeyPath
	I0625 16:50:57.877359   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHUsername
	I0625 16:50:57.877586   66820 main.go:141] libmachine: Using SSH client type: native
	I0625 16:50:57.877801   66820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I0625 16:50:57.877817   66820 main.go:141] libmachine: About to run SSH command:
	hostname
	I0625 16:50:57.983483   66820 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-756277
	
	I0625 16:50:57.983506   66820 main.go:141] libmachine: (pause-756277) Calling .GetMachineName
	I0625 16:50:57.983777   66820 buildroot.go:166] provisioning hostname "pause-756277"
	I0625 16:50:57.983801   66820 main.go:141] libmachine: (pause-756277) Calling .GetMachineName
	I0625 16:50:57.983986   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHHostname
	I0625 16:50:57.986967   66820 main.go:141] libmachine: (pause-756277) DBG | domain pause-756277 has defined MAC address 52:54:00:84:e8:5f in network mk-pause-756277
	I0625 16:50:57.987385   66820 main.go:141] libmachine: (pause-756277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:e8:5f", ip: ""} in network mk-pause-756277: {Iface:virbr3 ExpiryTime:2024-06-25 17:49:04 +0000 UTC Type:0 Mac:52:54:00:84:e8:5f Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:pause-756277 Clientid:01:52:54:00:84:e8:5f}
	I0625 16:50:57.987421   66820 main.go:141] libmachine: (pause-756277) DBG | domain pause-756277 has defined IP address 192.168.50.163 and MAC address 52:54:00:84:e8:5f in network mk-pause-756277
	I0625 16:50:57.987511   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHPort
	I0625 16:50:57.987691   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHKeyPath
	I0625 16:50:57.987859   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHKeyPath
	I0625 16:50:57.988005   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHUsername
	I0625 16:50:57.988193   66820 main.go:141] libmachine: Using SSH client type: native
	I0625 16:50:57.988403   66820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I0625 16:50:57.988417   66820 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-756277 && echo "pause-756277" | sudo tee /etc/hostname
	I0625 16:50:58.113665   66820 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-756277
	
	I0625 16:50:58.113711   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHHostname
	I0625 16:50:58.116997   66820 main.go:141] libmachine: (pause-756277) DBG | domain pause-756277 has defined MAC address 52:54:00:84:e8:5f in network mk-pause-756277
	I0625 16:50:58.117452   66820 main.go:141] libmachine: (pause-756277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:e8:5f", ip: ""} in network mk-pause-756277: {Iface:virbr3 ExpiryTime:2024-06-25 17:49:04 +0000 UTC Type:0 Mac:52:54:00:84:e8:5f Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:pause-756277 Clientid:01:52:54:00:84:e8:5f}
	I0625 16:50:58.117489   66820 main.go:141] libmachine: (pause-756277) DBG | domain pause-756277 has defined IP address 192.168.50.163 and MAC address 52:54:00:84:e8:5f in network mk-pause-756277
	I0625 16:50:58.117762   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHPort
	I0625 16:50:58.117948   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHKeyPath
	I0625 16:50:58.118139   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHKeyPath
	I0625 16:50:58.118290   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHUsername
	I0625 16:50:58.118490   66820 main.go:141] libmachine: Using SSH client type: native
	I0625 16:50:58.118723   66820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I0625 16:50:58.118745   66820 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-756277' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-756277/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-756277' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0625 16:50:58.232399   66820 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0625 16:50:58.232426   66820 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19128-13846/.minikube CaCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19128-13846/.minikube}
	I0625 16:50:58.232478   66820 buildroot.go:174] setting up certificates
	I0625 16:50:58.232490   66820 provision.go:84] configureAuth start
	I0625 16:50:58.232503   66820 main.go:141] libmachine: (pause-756277) Calling .GetMachineName
	I0625 16:50:58.232781   66820 main.go:141] libmachine: (pause-756277) Calling .GetIP
	I0625 16:50:58.235499   66820 main.go:141] libmachine: (pause-756277) DBG | domain pause-756277 has defined MAC address 52:54:00:84:e8:5f in network mk-pause-756277
	I0625 16:50:58.235863   66820 main.go:141] libmachine: (pause-756277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:e8:5f", ip: ""} in network mk-pause-756277: {Iface:virbr3 ExpiryTime:2024-06-25 17:49:04 +0000 UTC Type:0 Mac:52:54:00:84:e8:5f Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:pause-756277 Clientid:01:52:54:00:84:e8:5f}
	I0625 16:50:58.235889   66820 main.go:141] libmachine: (pause-756277) DBG | domain pause-756277 has defined IP address 192.168.50.163 and MAC address 52:54:00:84:e8:5f in network mk-pause-756277
	I0625 16:50:58.236070   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHHostname
	I0625 16:50:58.238449   66820 main.go:141] libmachine: (pause-756277) DBG | domain pause-756277 has defined MAC address 52:54:00:84:e8:5f in network mk-pause-756277
	I0625 16:50:58.238816   66820 main.go:141] libmachine: (pause-756277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:e8:5f", ip: ""} in network mk-pause-756277: {Iface:virbr3 ExpiryTime:2024-06-25 17:49:04 +0000 UTC Type:0 Mac:52:54:00:84:e8:5f Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:pause-756277 Clientid:01:52:54:00:84:e8:5f}
	I0625 16:50:58.238849   66820 main.go:141] libmachine: (pause-756277) DBG | domain pause-756277 has defined IP address 192.168.50.163 and MAC address 52:54:00:84:e8:5f in network mk-pause-756277
	I0625 16:50:58.238954   66820 provision.go:143] copyHostCerts
	I0625 16:50:58.239014   66820 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem, removing ...
	I0625 16:50:58.239023   66820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem
	I0625 16:50:58.239075   66820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem (1078 bytes)
	I0625 16:50:58.239194   66820 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem, removing ...
	I0625 16:50:58.239208   66820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem
	I0625 16:50:58.239240   66820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem (1123 bytes)
	I0625 16:50:58.239331   66820 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem, removing ...
	I0625 16:50:58.239340   66820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem
	I0625 16:50:58.239359   66820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem (1679 bytes)
	I0625 16:50:58.239421   66820 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem org=jenkins.pause-756277 san=[127.0.0.1 192.168.50.163 localhost minikube pause-756277]
	I0625 16:50:58.314596   66820 provision.go:177] copyRemoteCerts
	I0625 16:50:58.314660   66820 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0625 16:50:58.314699   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHHostname
	I0625 16:50:58.317679   66820 main.go:141] libmachine: (pause-756277) DBG | domain pause-756277 has defined MAC address 52:54:00:84:e8:5f in network mk-pause-756277
	I0625 16:50:58.318130   66820 main.go:141] libmachine: (pause-756277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:e8:5f", ip: ""} in network mk-pause-756277: {Iface:virbr3 ExpiryTime:2024-06-25 17:49:04 +0000 UTC Type:0 Mac:52:54:00:84:e8:5f Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:pause-756277 Clientid:01:52:54:00:84:e8:5f}
	I0625 16:50:58.318162   66820 main.go:141] libmachine: (pause-756277) DBG | domain pause-756277 has defined IP address 192.168.50.163 and MAC address 52:54:00:84:e8:5f in network mk-pause-756277
	I0625 16:50:58.318443   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHPort
	I0625 16:50:58.318640   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHKeyPath
	I0625 16:50:58.318779   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHUsername
	I0625 16:50:58.318965   66820 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/pause-756277/id_rsa Username:docker}
	I0625 16:50:58.404559   66820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0625 16:50:58.436021   66820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0625 16:50:58.468177   66820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0625 16:50:58.497407   66820 provision.go:87] duration metric: took 264.907421ms to configureAuth
	I0625 16:50:58.497432   66820 buildroot.go:189] setting minikube options for container-runtime
	I0625 16:50:58.497670   66820 config.go:182] Loaded profile config "pause-756277": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:50:58.497753   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHHostname
	I0625 16:50:58.500492   66820 main.go:141] libmachine: (pause-756277) DBG | domain pause-756277 has defined MAC address 52:54:00:84:e8:5f in network mk-pause-756277
	I0625 16:50:58.500922   66820 main.go:141] libmachine: (pause-756277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:e8:5f", ip: ""} in network mk-pause-756277: {Iface:virbr3 ExpiryTime:2024-06-25 17:49:04 +0000 UTC Type:0 Mac:52:54:00:84:e8:5f Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:pause-756277 Clientid:01:52:54:00:84:e8:5f}
	I0625 16:50:58.500978   66820 main.go:141] libmachine: (pause-756277) DBG | domain pause-756277 has defined IP address 192.168.50.163 and MAC address 52:54:00:84:e8:5f in network mk-pause-756277
	I0625 16:50:58.501158   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHPort
	I0625 16:50:58.501336   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHKeyPath
	I0625 16:50:58.501529   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHKeyPath
	I0625 16:50:58.501720   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHUsername
	I0625 16:50:58.501873   66820 main.go:141] libmachine: Using SSH client type: native
	I0625 16:50:58.502083   66820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I0625 16:50:58.502106   66820 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0625 16:51:04.125193   66820 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0625 16:51:04.125219   66820 machine.go:97] duration metric: took 6.252003771s to provisionDockerMachine
	I0625 16:51:04.125232   66820 start.go:293] postStartSetup for "pause-756277" (driver="kvm2")
	I0625 16:51:04.125245   66820 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0625 16:51:04.125275   66820 main.go:141] libmachine: (pause-756277) Calling .DriverName
	I0625 16:51:04.125581   66820 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0625 16:51:04.125605   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHHostname
	I0625 16:51:04.128640   66820 main.go:141] libmachine: (pause-756277) DBG | domain pause-756277 has defined MAC address 52:54:00:84:e8:5f in network mk-pause-756277
	I0625 16:51:04.129056   66820 main.go:141] libmachine: (pause-756277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:e8:5f", ip: ""} in network mk-pause-756277: {Iface:virbr3 ExpiryTime:2024-06-25 17:49:04 +0000 UTC Type:0 Mac:52:54:00:84:e8:5f Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:pause-756277 Clientid:01:52:54:00:84:e8:5f}
	I0625 16:51:04.129085   66820 main.go:141] libmachine: (pause-756277) DBG | domain pause-756277 has defined IP address 192.168.50.163 and MAC address 52:54:00:84:e8:5f in network mk-pause-756277
	I0625 16:51:04.129310   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHPort
	I0625 16:51:04.129546   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHKeyPath
	I0625 16:51:04.129744   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHUsername
	I0625 16:51:04.130046   66820 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/pause-756277/id_rsa Username:docker}
	I0625 16:51:04.219476   66820 ssh_runner.go:195] Run: cat /etc/os-release
	I0625 16:51:04.225812   66820 info.go:137] Remote host: Buildroot 2023.02.9
	I0625 16:51:04.225888   66820 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/addons for local assets ...
	I0625 16:51:04.225949   66820 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/files for local assets ...
	I0625 16:51:04.226061   66820 filesync.go:149] local asset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> 212392.pem in /etc/ssl/certs
	I0625 16:51:04.226197   66820 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0625 16:51:04.237465   66820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem --> /etc/ssl/certs/212392.pem (1708 bytes)
	I0625 16:51:04.264292   66820 start.go:296] duration metric: took 139.043248ms for postStartSetup
	I0625 16:51:04.264347   66820 fix.go:56] duration metric: took 6.415747425s for fixHost
	I0625 16:51:04.264373   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHHostname
	I0625 16:51:04.267508   66820 main.go:141] libmachine: (pause-756277) DBG | domain pause-756277 has defined MAC address 52:54:00:84:e8:5f in network mk-pause-756277
	I0625 16:51:04.267929   66820 main.go:141] libmachine: (pause-756277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:e8:5f", ip: ""} in network mk-pause-756277: {Iface:virbr3 ExpiryTime:2024-06-25 17:49:04 +0000 UTC Type:0 Mac:52:54:00:84:e8:5f Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:pause-756277 Clientid:01:52:54:00:84:e8:5f}
	I0625 16:51:04.267955   66820 main.go:141] libmachine: (pause-756277) DBG | domain pause-756277 has defined IP address 192.168.50.163 and MAC address 52:54:00:84:e8:5f in network mk-pause-756277
	I0625 16:51:04.268185   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHPort
	I0625 16:51:04.268420   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHKeyPath
	I0625 16:51:04.268607   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHKeyPath
	I0625 16:51:04.268750   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHUsername
	I0625 16:51:04.268911   66820 main.go:141] libmachine: Using SSH client type: native
	I0625 16:51:04.269079   66820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I0625 16:51:04.269089   66820 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0625 16:51:04.384533   66820 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719334264.374657193
	
	I0625 16:51:04.384555   66820 fix.go:216] guest clock: 1719334264.374657193
	I0625 16:51:04.384566   66820 fix.go:229] Guest: 2024-06-25 16:51:04.374657193 +0000 UTC Remote: 2024-06-25 16:51:04.26435173 +0000 UTC m=+36.850256489 (delta=110.305463ms)
	I0625 16:51:04.384615   66820 fix.go:200] guest clock delta is within tolerance: 110.305463ms
	I0625 16:51:04.384626   66820 start.go:83] releasing machines lock for "pause-756277", held for 6.536058167s
	I0625 16:51:04.384656   66820 main.go:141] libmachine: (pause-756277) Calling .DriverName
	I0625 16:51:04.384913   66820 main.go:141] libmachine: (pause-756277) Calling .GetIP
	I0625 16:51:04.388304   66820 main.go:141] libmachine: (pause-756277) DBG | domain pause-756277 has defined MAC address 52:54:00:84:e8:5f in network mk-pause-756277
	I0625 16:51:04.388760   66820 main.go:141] libmachine: (pause-756277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:e8:5f", ip: ""} in network mk-pause-756277: {Iface:virbr3 ExpiryTime:2024-06-25 17:49:04 +0000 UTC Type:0 Mac:52:54:00:84:e8:5f Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:pause-756277 Clientid:01:52:54:00:84:e8:5f}
	I0625 16:51:04.388796   66820 main.go:141] libmachine: (pause-756277) DBG | domain pause-756277 has defined IP address 192.168.50.163 and MAC address 52:54:00:84:e8:5f in network mk-pause-756277
	I0625 16:51:04.388971   66820 main.go:141] libmachine: (pause-756277) Calling .DriverName
	I0625 16:51:04.389656   66820 main.go:141] libmachine: (pause-756277) Calling .DriverName
	I0625 16:51:04.389847   66820 main.go:141] libmachine: (pause-756277) Calling .DriverName
	I0625 16:51:04.389929   66820 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0625 16:51:04.389991   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHHostname
	I0625 16:51:04.390107   66820 ssh_runner.go:195] Run: cat /version.json
	I0625 16:51:04.390127   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHHostname
	I0625 16:51:04.393459   66820 main.go:141] libmachine: (pause-756277) DBG | domain pause-756277 has defined MAC address 52:54:00:84:e8:5f in network mk-pause-756277
	I0625 16:51:04.393760   66820 main.go:141] libmachine: (pause-756277) DBG | domain pause-756277 has defined MAC address 52:54:00:84:e8:5f in network mk-pause-756277
	I0625 16:51:04.393834   66820 main.go:141] libmachine: (pause-756277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:e8:5f", ip: ""} in network mk-pause-756277: {Iface:virbr3 ExpiryTime:2024-06-25 17:49:04 +0000 UTC Type:0 Mac:52:54:00:84:e8:5f Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:pause-756277 Clientid:01:52:54:00:84:e8:5f}
	I0625 16:51:04.393866   66820 main.go:141] libmachine: (pause-756277) DBG | domain pause-756277 has defined IP address 192.168.50.163 and MAC address 52:54:00:84:e8:5f in network mk-pause-756277
	I0625 16:51:04.393995   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHPort
	I0625 16:51:04.394224   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHKeyPath
	I0625 16:51:04.394313   66820 main.go:141] libmachine: (pause-756277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:e8:5f", ip: ""} in network mk-pause-756277: {Iface:virbr3 ExpiryTime:2024-06-25 17:49:04 +0000 UTC Type:0 Mac:52:54:00:84:e8:5f Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:pause-756277 Clientid:01:52:54:00:84:e8:5f}
	I0625 16:51:04.394336   66820 main.go:141] libmachine: (pause-756277) DBG | domain pause-756277 has defined IP address 192.168.50.163 and MAC address 52:54:00:84:e8:5f in network mk-pause-756277
	I0625 16:51:04.394366   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHUsername
	I0625 16:51:04.394409   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHPort
	I0625 16:51:04.394545   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHKeyPath
	I0625 16:51:04.394558   66820 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/pause-756277/id_rsa Username:docker}
	I0625 16:51:04.394725   66820 main.go:141] libmachine: (pause-756277) Calling .GetSSHUsername
	I0625 16:51:04.394865   66820 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/pause-756277/id_rsa Username:docker}
	I0625 16:51:04.481474   66820 ssh_runner.go:195] Run: systemctl --version
	I0625 16:51:04.501558   66820 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0625 16:51:04.670831   66820 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0625 16:51:04.680045   66820 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0625 16:51:04.680112   66820 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0625 16:51:04.690349   66820 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0625 16:51:04.690370   66820 start.go:494] detecting cgroup driver to use...
	I0625 16:51:04.690426   66820 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0625 16:51:04.711571   66820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0625 16:51:04.726454   66820 docker.go:217] disabling cri-docker service (if available) ...
	I0625 16:51:04.726537   66820 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0625 16:51:04.743335   66820 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0625 16:51:04.761093   66820 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0625 16:51:04.943404   66820 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0625 16:51:05.111496   66820 docker.go:233] disabling docker service ...
	I0625 16:51:05.111570   66820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0625 16:51:05.133517   66820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0625 16:51:05.148528   66820 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0625 16:51:05.307822   66820 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0625 16:51:05.466183   66820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0625 16:51:05.482897   66820 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0625 16:51:05.503750   66820 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0625 16:51:05.503863   66820 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:51:05.515027   66820 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0625 16:51:05.515094   66820 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:51:05.526487   66820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:51:05.538018   66820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:51:05.548841   66820 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0625 16:51:05.560565   66820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:51:05.571926   66820 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:51:05.584624   66820 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:51:05.597517   66820 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0625 16:51:05.608174   66820 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0625 16:51:05.619157   66820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 16:51:05.757340   66820 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0625 16:51:12.862240   66820 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.104856927s)
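The sed edits above all target the same drop-in file, /etc/crio/crio.conf.d/02-crio.conf. A minimal sketch of how the result could be spot-checked on the node after the restart (illustrative only, not part of the test run):

    # keys set or adjusted by the commands above
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf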
	I0625 16:51:12.862296   66820 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0625 16:51:12.862350   66820 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0625 16:51:12.869737   66820 start.go:562] Will wait 60s for crictl version
	I0625 16:51:12.869802   66820 ssh_runner.go:195] Run: which crictl
	I0625 16:51:12.874272   66820 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0625 16:51:12.919418   66820 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0625 16:51:12.919513   66820 ssh_runner.go:195] Run: crio --version
	I0625 16:51:12.954098   66820 ssh_runner.go:195] Run: crio --version
	I0625 16:51:12.995924   66820 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0625 16:51:12.997332   66820 main.go:141] libmachine: (pause-756277) Calling .GetIP
	I0625 16:51:13.000396   66820 main.go:141] libmachine: (pause-756277) DBG | domain pause-756277 has defined MAC address 52:54:00:84:e8:5f in network mk-pause-756277
	I0625 16:51:13.000832   66820 main.go:141] libmachine: (pause-756277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:e8:5f", ip: ""} in network mk-pause-756277: {Iface:virbr3 ExpiryTime:2024-06-25 17:49:04 +0000 UTC Type:0 Mac:52:54:00:84:e8:5f Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:pause-756277 Clientid:01:52:54:00:84:e8:5f}
	I0625 16:51:13.000868   66820 main.go:141] libmachine: (pause-756277) DBG | domain pause-756277 has defined IP address 192.168.50.163 and MAC address 52:54:00:84:e8:5f in network mk-pause-756277
	I0625 16:51:13.001221   66820 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0625 16:51:13.006556   66820 kubeadm.go:877] updating cluster {Name:pause-756277 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:pause-756277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.163 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0625 16:51:13.006760   66820 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0625 16:51:13.006830   66820 ssh_runner.go:195] Run: sudo crictl images --output json
	I0625 16:51:13.063069   66820 crio.go:514] all images are preloaded for cri-o runtime.
	I0625 16:51:13.063094   66820 crio.go:433] Images already preloaded, skipping extraction
	I0625 16:51:13.063158   66820 ssh_runner.go:195] Run: sudo crictl images --output json
	I0625 16:51:13.108968   66820 crio.go:514] all images are preloaded for cri-o runtime.
	I0625 16:51:13.108991   66820 cache_images.go:84] Images are preloaded, skipping loading
	I0625 16:51:13.109001   66820 kubeadm.go:928] updating node { 192.168.50.163 8443 v1.30.2 crio true true} ...
	I0625 16:51:13.109091   66820 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-756277 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:pause-756277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0625 16:51:13.109153   66820 ssh_runner.go:195] Run: crio config
	I0625 16:51:13.177692   66820 cni.go:84] Creating CNI manager for ""
	I0625 16:51:13.177716   66820 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0625 16:51:13.177728   66820 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0625 16:51:13.177758   66820 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.163 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-756277 NodeName:pause-756277 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0625 16:51:13.177936   66820 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.163
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-756277"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0625 16:51:13.178007   66820 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0625 16:51:13.193286   66820 binaries.go:44] Found k8s binaries, skipping transfer
	I0625 16:51:13.193360   66820 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0625 16:51:13.207610   66820 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0625 16:51:13.233197   66820 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0625 16:51:13.256068   66820 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
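For reference, a kubeadm configuration of the shape generated above is consumed through kubeadm's --config flag; the staged kubeadm.yaml.new is the input minikube prepares for that step. A minimal, hypothetical invocation (binary path taken from the log, exact arguments assumed, not the command the test harness actually runs):

    # illustrative only
    sudo /var/lib/minikube/binaries/v1.30.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new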
	I0625 16:51:13.277739   66820 ssh_runner.go:195] Run: grep 192.168.50.163	control-plane.minikube.internal$ /etc/hosts
	I0625 16:51:13.283409   66820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 16:51:13.485874   66820 ssh_runner.go:195] Run: sudo systemctl start kubelet
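To see the kubelet unit and the 10-kubeadm.conf drop-in that were just written, a check along these lines would work on the node (illustrative only):

    # prints /lib/systemd/system/kubelet.service plus any drop-ins, e.g. 10-kubeadm.conf
    sudo systemctl cat kubelet
    sudo systemctl status kubelet --no-pager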
	I0625 16:51:13.508933   66820 certs.go:68] Setting up /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/pause-756277 for IP: 192.168.50.163
	I0625 16:51:13.508960   66820 certs.go:194] generating shared ca certs ...
	I0625 16:51:13.508978   66820 certs.go:226] acquiring lock for ca certs: {Name:mkac904b769881cd26c50f043dc80ff92937f71f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:51:13.509136   66820 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key
	I0625 16:51:13.509233   66820 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key
	I0625 16:51:13.509250   66820 certs.go:256] generating profile certs ...
	I0625 16:51:13.509356   66820 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/pause-756277/client.key
	I0625 16:51:13.509425   66820 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/pause-756277/apiserver.key.ed58e786
	I0625 16:51:13.509479   66820 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/pause-756277/proxy-client.key
	I0625 16:51:13.509621   66820 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem (1338 bytes)
	W0625 16:51:13.509662   66820 certs.go:480] ignoring /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239_empty.pem, impossibly tiny 0 bytes
	I0625 16:51:13.509676   66820 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem (1679 bytes)
	I0625 16:51:13.509706   66820 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem (1078 bytes)
	I0625 16:51:13.509737   66820 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem (1123 bytes)
	I0625 16:51:13.509769   66820 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem (1679 bytes)
	I0625 16:51:13.509823   66820 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem (1708 bytes)
	I0625 16:51:13.510649   66820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0625 16:51:13.547878   66820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0625 16:51:13.577802   66820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0625 16:51:13.606317   66820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0625 16:51:13.643589   66820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/pause-756277/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0625 16:51:13.678485   66820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/pause-756277/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0625 16:51:13.715768   66820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/pause-756277/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0625 16:51:13.755993   66820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/pause-756277/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0625 16:51:13.794249   66820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem --> /usr/share/ca-certificates/21239.pem (1338 bytes)
	I0625 16:51:13.829420   66820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem --> /usr/share/ca-certificates/212392.pem (1708 bytes)
	I0625 16:51:13.893234   66820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0625 16:51:13.923978   66820 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0625 16:51:13.971300   66820 ssh_runner.go:195] Run: openssl version
	I0625 16:51:13.983059   66820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21239.pem && ln -fs /usr/share/ca-certificates/21239.pem /etc/ssl/certs/21239.pem"
	I0625 16:51:14.012816   66820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21239.pem
	I0625 16:51:14.027145   66820 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 25 15:51 /usr/share/ca-certificates/21239.pem
	I0625 16:51:14.027252   66820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21239.pem
	I0625 16:51:14.044122   66820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21239.pem /etc/ssl/certs/51391683.0"
	I0625 16:51:14.163329   66820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/212392.pem && ln -fs /usr/share/ca-certificates/212392.pem /etc/ssl/certs/212392.pem"
	I0625 16:51:14.210661   66820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/212392.pem
	I0625 16:51:14.233217   66820 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 25 15:51 /usr/share/ca-certificates/212392.pem
	I0625 16:51:14.233289   66820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/212392.pem
	I0625 16:51:14.279303   66820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/212392.pem /etc/ssl/certs/3ec20f2e.0"
	I0625 16:51:14.318595   66820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0625 16:51:14.343449   66820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:51:14.356355   66820 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 25 15:10 /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:51:14.356429   66820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:51:14.376280   66820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0625 16:51:14.435547   66820 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0625 16:51:14.447846   66820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0625 16:51:14.497869   66820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0625 16:51:14.519854   66820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0625 16:51:14.604953   66820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0625 16:51:14.649879   66820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0625 16:51:14.672298   66820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
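The -checkend probes above lean on openssl's exit status: 0 means the certificate will still be valid 86400 seconds (24 hours) from now, non-zero means it will have expired by then. A minimal sketch of the same check run by hand (path taken from the log):

    # exit status 0 => certificate still valid 24h from now
    if sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
        echo "certificate valid for at least another 24h"
    else
        echo "certificate expires within 24h (or is already expired)"
    fi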
	I0625 16:51:14.708370   66820 kubeadm.go:391] StartCluster: {Name:pause-756277 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:pause-756277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.163 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 16:51:14.708504   66820 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0625 16:51:14.708600   66820 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0625 16:51:14.909608   66820 cri.go:89] found id: "edec890ad5331763619a1058109ef59719931eb1e66170f810b25b86a63bbd3c"
	I0625 16:51:14.909641   66820 cri.go:89] found id: "a43a1e8fd4e02cc25bcade220c757c0d4c7e0c5ef687525fa7058aea35ce1d0e"
	I0625 16:51:14.909648   66820 cri.go:89] found id: "4711285f965e8c05454daca7fcdcc495b4cdb478f1da0464bbf229ee779c5f2a"
	I0625 16:51:14.909653   66820 cri.go:89] found id: "7f6fba0c0a9f02a1736519655e7546883e15c7aad2270f2c098353e2a7a73987"
	I0625 16:51:14.909657   66820 cri.go:89] found id: "7b84409a73e9b5343e7d212348d422ad0b4684236d9e23e22a5625efc3a4cf2f"
	I0625 16:51:14.909662   66820 cri.go:89] found id: "ff262f22a4c47e84a2d88b7d9a5081a6f3b9eb6fa7586edd89ac602bcd13064d"
	I0625 16:51:14.909666   66820 cri.go:89] found id: "a069650666fa8a4ce07a6ec62b130cbe95f58636168b4f2821c41980019572e6"
	I0625 16:51:14.909669   66820 cri.go:89] found id: "43917e1267c632356c51bbacb32896964f0079a77ec8a33ba35a22dec780e94d"
	I0625 16:51:14.909673   66820 cri.go:89] found id: ""
	I0625 16:51:14.909722   66820 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-756277 -n pause-756277
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-756277 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-756277 logs -n 25: (1.388570177s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                    Args                    |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-514698 sudo cat                  | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo                      | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | cri-dockerd --version                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo                      | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | systemctl status containerd                |                           |         |         |                     |                     |
	|         | --all --full --no-pager                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo                      | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | systemctl cat containerd                   |                           |         |         |                     |                     |
	|         | --no-pager                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo cat                  | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | /lib/systemd/system/containerd.service     |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo cat                  | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | /etc/containerd/config.toml                |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo                      | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | containerd config dump                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo                      | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | systemctl status crio --all                |                           |         |         |                     |                     |
	|         | --full --no-pager                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo                      | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | systemctl cat crio --no-pager              |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo find                 | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | /etc/crio -type f -exec sh -c              |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                       |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo crio                 | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | config                                     |                           |         |         |                     |                     |
	| delete  | -p cilium-514698                           | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC | 25 Jun 24 16:49 UTC |
	| stop    | -p kubernetes-upgrade-497568               | kubernetes-upgrade-497568 | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC | 25 Jun 24 16:49 UTC |
	| start   | -p kubernetes-upgrade-497568               | kubernetes-upgrade-497568 | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC | 25 Jun 24 16:50 UTC |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2               |                           |         |         |                     |                     |
	|         | --alsologtostderr                          |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| start   | -p cert-expiration-076008                  | cert-expiration-076008    | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC | 25 Jun 24 16:50 UTC |
	|         | --memory=2048                              |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                       |                           |         |         |                     |                     |
	|         | --driver=kvm2                              |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-759584                | force-systemd-env-759584  | jenkins | v1.33.1 | 25 Jun 24 16:50 UTC | 25 Jun 24 16:50 UTC |
	| start   | -p force-systemd-flag-740596               | force-systemd-flag-740596 | jenkins | v1.33.1 | 25 Jun 24 16:50 UTC | 25 Jun 24 16:51 UTC |
	|         | --memory=2048 --force-systemd              |                           |         |         |                     |                     |
	|         | --alsologtostderr                          |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| start   | -p pause-756277                            | pause-756277              | jenkins | v1.33.1 | 25 Jun 24 16:50 UTC | 25 Jun 24 16:51 UTC |
	|         | --alsologtostderr                          |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-497568               | kubernetes-upgrade-497568 | jenkins | v1.33.1 | 25 Jun 24 16:50 UTC |                     |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0               |                           |         |         |                     |                     |
	|         | --driver=kvm2                              |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-497568               | kubernetes-upgrade-497568 | jenkins | v1.33.1 | 25 Jun 24 16:50 UTC | 25 Jun 24 16:51 UTC |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2               |                           |         |         |                     |                     |
	|         | --alsologtostderr                          |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-740596 ssh cat          | force-systemd-flag-740596 | jenkins | v1.33.1 | 25 Jun 24 16:51 UTC | 25 Jun 24 16:51 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf         |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-740596               | force-systemd-flag-740596 | jenkins | v1.33.1 | 25 Jun 24 16:51 UTC | 25 Jun 24 16:51 UTC |
	| start   | -p cert-options-742979                     | cert-options-742979       | jenkins | v1.33.1 | 25 Jun 24 16:51 UTC |                     |
	|         | --memory=2048                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                  |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15              |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com           |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                      |                           |         |         |                     |                     |
	|         | --driver=kvm2                              |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-497568               | kubernetes-upgrade-497568 | jenkins | v1.33.1 | 25 Jun 24 16:51 UTC | 25 Jun 24 16:51 UTC |
	| start   | -p old-k8s-version-462347                  | old-k8s-version-462347    | jenkins | v1.33.1 | 25 Jun 24 16:51 UTC |                     |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true              |                           |         |         |                     |                     |
	|         | --kvm-network=default                      |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system              |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                    |                           |         |         |                     |                     |
	|         | --keep-context=false                       |                           |         |         |                     |                     |
	|         | --driver=kvm2                              |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0               |                           |         |         |                     |                     |
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/25 16:51:43
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0625 16:51:43.429508   67969 out.go:291] Setting OutFile to fd 1 ...
	I0625 16:51:43.429718   67969 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:51:43.429727   67969 out.go:304] Setting ErrFile to fd 2...
	I0625 16:51:43.429731   67969 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:51:43.429912   67969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 16:51:43.430412   67969 out.go:298] Setting JSON to false
	I0625 16:51:43.431354   67969 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9247,"bootTime":1719325056,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0625 16:51:43.431408   67969 start.go:139] virtualization: kvm guest
	I0625 16:51:43.433659   67969 out.go:177] * [old-k8s-version-462347] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0625 16:51:43.435144   67969 out.go:177]   - MINIKUBE_LOCATION=19128
	I0625 16:51:43.435162   67969 notify.go:220] Checking for updates...
	I0625 16:51:43.437619   67969 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0625 16:51:43.438921   67969 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19128-13846/kubeconfig
	I0625 16:51:43.440254   67969 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 16:51:43.441509   67969 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0625 16:51:43.442912   67969 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0625 16:51:43.444449   67969 config.go:182] Loaded profile config "cert-expiration-076008": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:51:43.444562   67969 config.go:182] Loaded profile config "cert-options-742979": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:51:43.444719   67969 config.go:182] Loaded profile config "pause-756277": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:51:43.444814   67969 driver.go:392] Setting default libvirt URI to qemu:///system
	I0625 16:51:43.480208   67969 out.go:177] * Using the kvm2 driver based on user configuration
	I0625 16:51:43.481426   67969 start.go:297] selected driver: kvm2
	I0625 16:51:43.481441   67969 start.go:901] validating driver "kvm2" against <nil>
	I0625 16:51:43.481455   67969 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0625 16:51:43.482141   67969 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0625 16:51:43.482201   67969 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19128-13846/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0625 16:51:43.497309   67969 install.go:137] /home/jenkins/minikube-integration/19128-13846/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0625 16:51:43.497376   67969 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0625 16:51:43.497597   67969 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0625 16:51:43.497667   67969 cni.go:84] Creating CNI manager for ""
	I0625 16:51:43.497684   67969 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0625 16:51:43.497696   67969 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0625 16:51:43.497755   67969 start.go:340] cluster config:
	{Name:old-k8s-version-462347 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-462347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 16:51:43.497881   67969 iso.go:125] acquiring lock: {Name:mk76df652d5e768afc73443035d5ecb8b75ed16c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0625 16:51:43.500654   67969 out.go:177] * Starting "old-k8s-version-462347" primary control-plane node in "old-k8s-version-462347" cluster
	I0625 16:51:43.501901   67969 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0625 16:51:43.501939   67969 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0625 16:51:43.501952   67969 cache.go:56] Caching tarball of preloaded images
	I0625 16:51:43.502044   67969 preload.go:173] Found /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0625 16:51:43.502058   67969 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0625 16:51:43.502169   67969 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/old-k8s-version-462347/config.json ...
	I0625 16:51:43.502190   67969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/old-k8s-version-462347/config.json: {Name:mk65a4e524b9b7230e9ec3336d3ee84ebe9e5eda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:51:43.502342   67969 start.go:360] acquireMachinesLock for old-k8s-version-462347: {Name:mk2a1ebee912b37a2b68bf2f76641f82f8fc2fcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0625 16:51:44.924185   67969 start.go:364] duration metric: took 1.421787034s to acquireMachinesLock for "old-k8s-version-462347"
	I0625 16:51:44.924248   67969 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-462347 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-462347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0625 16:51:44.924356   67969 start.go:125] createHost starting for "" (driver="kvm2")
	I0625 16:51:43.287403   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.288096   67510 main.go:141] libmachine: (cert-options-742979) Found IP for machine: 192.168.83.28
	I0625 16:51:43.288117   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has current primary IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.288125   67510 main.go:141] libmachine: (cert-options-742979) Reserving static IP address...
	I0625 16:51:43.288490   67510 main.go:141] libmachine: (cert-options-742979) DBG | unable to find host DHCP lease matching {name: "cert-options-742979", mac: "52:54:00:b5:c8:1f", ip: "192.168.83.28"} in network mk-cert-options-742979
	I0625 16:51:43.363354   67510 main.go:141] libmachine: (cert-options-742979) DBG | Getting to WaitForSSH function...
	I0625 16:51:43.363370   67510 main.go:141] libmachine: (cert-options-742979) Reserved static IP address: 192.168.83.28
	I0625 16:51:43.363380   67510 main.go:141] libmachine: (cert-options-742979) Waiting for SSH to be available...
	I0625 16:51:43.366211   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.366693   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:43.366709   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.366845   67510 main.go:141] libmachine: (cert-options-742979) DBG | Using SSH client type: external
	I0625 16:51:43.366859   67510 main.go:141] libmachine: (cert-options-742979) DBG | Using SSH private key: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/cert-options-742979/id_rsa (-rw-------)
	I0625 16:51:43.366892   67510 main.go:141] libmachine: (cert-options-742979) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.28 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19128-13846/.minikube/machines/cert-options-742979/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0625 16:51:43.366900   67510 main.go:141] libmachine: (cert-options-742979) DBG | About to run SSH command:
	I0625 16:51:43.366910   67510 main.go:141] libmachine: (cert-options-742979) DBG | exit 0
	I0625 16:51:43.498776   67510 main.go:141] libmachine: (cert-options-742979) DBG | SSH cmd err, output: <nil>: 
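For context on the WaitForSSH step above: libmachine simply re-runs the external ssh client with the options shown in the DBG lines until a plain `exit 0` succeeds. A minimal Go sketch of that retry loop, with a placeholder host, key path, and timeout (none taken from this run):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries `ssh ... exit 0` until it succeeds or the deadline passes.
// Host and key path are illustrative placeholders.
func waitForSSH(host, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+host,
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil // SSH is available
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s not available after %s", host, timeout)
}

func main() {
	if err := waitForSSH("192.168.83.28", "/path/to/id_rsa", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}

The real code also switches between the "external" and "native" SSH client types seen later in the log; the sketch only covers the external case.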
	I0625 16:51:43.498992   67510 main.go:141] libmachine: (cert-options-742979) KVM machine creation complete!
	I0625 16:51:43.499338   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetConfigRaw
	I0625 16:51:43.500318   67510 main.go:141] libmachine: (cert-options-742979) Calling .DriverName
	I0625 16:51:43.500526   67510 main.go:141] libmachine: (cert-options-742979) Calling .DriverName
	I0625 16:51:43.500669   67510 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0625 16:51:43.500679   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetState
	I0625 16:51:43.502386   67510 main.go:141] libmachine: Detecting operating system of created instance...
	I0625 16:51:43.502395   67510 main.go:141] libmachine: Waiting for SSH to be available...
	I0625 16:51:43.502401   67510 main.go:141] libmachine: Getting to WaitForSSH function...
	I0625 16:51:43.502408   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHHostname
	I0625 16:51:43.504645   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.505005   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:43.505040   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.505148   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHPort
	I0625 16:51:43.505306   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:43.505443   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:43.505553   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHUsername
	I0625 16:51:43.505696   67510 main.go:141] libmachine: Using SSH client type: native
	I0625 16:51:43.505859   67510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.83.28 22 <nil> <nil>}
	I0625 16:51:43.505864   67510 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0625 16:51:43.609541   67510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0625 16:51:43.609550   67510 main.go:141] libmachine: Detecting the provisioner...
	I0625 16:51:43.609556   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHHostname
	I0625 16:51:43.612297   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.612650   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:43.612672   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.612813   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHPort
	I0625 16:51:43.612943   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:43.613068   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:43.613159   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHUsername
	I0625 16:51:43.613305   67510 main.go:141] libmachine: Using SSH client type: native
	I0625 16:51:43.613453   67510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.83.28 22 <nil> <nil>}
	I0625 16:51:43.613458   67510 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0625 16:51:43.722803   67510 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0625 16:51:43.722855   67510 main.go:141] libmachine: found compatible host: buildroot
	I0625 16:51:43.722860   67510 main.go:141] libmachine: Provisioning with buildroot...
	I0625 16:51:43.722866   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetMachineName
	I0625 16:51:43.723096   67510 buildroot.go:166] provisioning hostname "cert-options-742979"
	I0625 16:51:43.723128   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetMachineName
	I0625 16:51:43.723295   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHHostname
	I0625 16:51:43.725903   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.726254   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:43.726275   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.726386   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHPort
	I0625 16:51:43.726572   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:43.726713   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:43.726803   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHUsername
	I0625 16:51:43.726940   67510 main.go:141] libmachine: Using SSH client type: native
	I0625 16:51:43.727112   67510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.83.28 22 <nil> <nil>}
	I0625 16:51:43.727118   67510 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-options-742979 && echo "cert-options-742979" | sudo tee /etc/hostname
	I0625 16:51:43.849085   67510 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-options-742979
	
	I0625 16:51:43.849104   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHHostname
	I0625 16:51:43.851870   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.852288   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:43.852308   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.852490   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHPort
	I0625 16:51:43.852679   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:43.852819   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:43.852986   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHUsername
	I0625 16:51:43.853137   67510 main.go:141] libmachine: Using SSH client type: native
	I0625 16:51:43.853331   67510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.83.28 22 <nil> <nil>}
	I0625 16:51:43.853342   67510 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-options-742979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-options-742979/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-options-742979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0625 16:51:43.966932   67510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
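The hostname provisioning above is one remote shell snippet: set the hostname, write /etc/hostname, and make sure /etc/hosts carries a matching 127.0.1.1 entry. A small sketch of how such a snippet can be rendered in Go (the helper name is made up for illustration):

package main

import "fmt"

// hostnameCmd renders the remote shell used to provision a hostname,
// mirroring the command shown in the log above.
func hostnameCmd(h string) string {
	return fmt.Sprintf(
		`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, h)
}

func main() {
	fmt.Println(hostnameCmd("cert-options-742979"))
}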
	I0625 16:51:43.966952   67510 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19128-13846/.minikube CaCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19128-13846/.minikube}
	I0625 16:51:43.966986   67510 buildroot.go:174] setting up certificates
	I0625 16:51:43.966997   67510 provision.go:84] configureAuth start
	I0625 16:51:43.967005   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetMachineName
	I0625 16:51:43.967272   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetIP
	I0625 16:51:43.969653   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.970018   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:43.970044   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.970183   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHHostname
	I0625 16:51:43.972392   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.972697   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:43.972717   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.972858   67510 provision.go:143] copyHostCerts
	I0625 16:51:43.972916   67510 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem, removing ...
	I0625 16:51:43.972928   67510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem
	I0625 16:51:43.972988   67510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem (1078 bytes)
	I0625 16:51:43.973075   67510 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem, removing ...
	I0625 16:51:43.973078   67510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem
	I0625 16:51:43.973099   67510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem (1123 bytes)
	I0625 16:51:43.973158   67510 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem, removing ...
	I0625 16:51:43.973160   67510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem
	I0625 16:51:43.973184   67510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem (1679 bytes)
	I0625 16:51:43.973237   67510 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem org=jenkins.cert-options-742979 san=[127.0.0.1 192.168.83.28 cert-options-742979 localhost minikube]
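The provision step above issues a server certificate signed by the bundled CA, with the SAN list printed in the log (loopback, the VM IP, and the host names). A compressed sketch of the same idea with Go's crypto/x509, assuming an RSA, PKCS#1-encoded CA key, with error handling trimmed; paths and the validity period are placeholders:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA paths are placeholders; the report keeps them under .minikube/certs.
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key

	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.cert-options-742979"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // illustrative validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN set taken from the log line above.
		DNSNames:    []string{"cert-options-742979", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.83.28")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	// The private key would be written out alongside the certificate (server-key.pem).
}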
	I0625 16:51:44.226766   67510 provision.go:177] copyRemoteCerts
	I0625 16:51:44.226804   67510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0625 16:51:44.226824   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHHostname
	I0625 16:51:44.229352   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.229698   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:44.229724   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.229859   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHPort
	I0625 16:51:44.230041   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:44.230178   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHUsername
	I0625 16:51:44.230301   67510 sshutil.go:53] new ssh client: &{IP:192.168.83.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/cert-options-742979/id_rsa Username:docker}
	I0625 16:51:44.314170   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0625 16:51:44.337862   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0625 16:51:44.360773   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0625 16:51:44.385688   67510 provision.go:87] duration metric: took 418.680723ms to configureAuth
	I0625 16:51:44.385704   67510 buildroot.go:189] setting minikube options for container-runtime
	I0625 16:51:44.385848   67510 config.go:182] Loaded profile config "cert-options-742979": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:51:44.385978   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHHostname
	I0625 16:51:44.388872   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.389215   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:44.389239   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.389419   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHPort
	I0625 16:51:44.389642   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:44.389802   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:44.389961   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHUsername
	I0625 16:51:44.390132   67510 main.go:141] libmachine: Using SSH client type: native
	I0625 16:51:44.390294   67510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.83.28 22 <nil> <nil>}
	I0625 16:51:44.390303   67510 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0625 16:51:44.678859   67510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
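The command above drops a CRIO_MINIKUBE_OPTIONS line into /etc/sysconfig/crio.minikube and restarts CRI-O; the `%!s(MISSING)` in the logged command looks like a format-verb artifact of the logger, and the echoed output confirms the option was actually written. A sketch of rendering that remote command (helper name is illustrative):

package main

import "fmt"

// crioSysconfigCmd renders the remote command that writes CRI-O's sysconfig
// drop-in and restarts the service.
func crioSysconfigCmd(opts string) string {
	content := "\nCRIO_MINIKUBE_OPTIONS='" + opts + "'\n"
	return `sudo mkdir -p /etc/sysconfig && printf %s "` + content +
		`" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
}

func main() {
	fmt.Println(crioSysconfigCmd("--insecure-registry 10.96.0.0/12 "))
}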
	
	I0625 16:51:44.678874   67510 main.go:141] libmachine: Checking connection to Docker...
	I0625 16:51:44.678879   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetURL
	I0625 16:51:44.680144   67510 main.go:141] libmachine: (cert-options-742979) DBG | Using libvirt version 6000000
	I0625 16:51:44.682378   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.682712   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:44.682740   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.682845   67510 main.go:141] libmachine: Docker is up and running!
	I0625 16:51:44.682855   67510 main.go:141] libmachine: Reticulating splines...
	I0625 16:51:44.682860   67510 client.go:171] duration metric: took 24.522308355s to LocalClient.Create
	I0625 16:51:44.682878   67510 start.go:167] duration metric: took 24.522359045s to libmachine.API.Create "cert-options-742979"
	I0625 16:51:44.682883   67510 start.go:293] postStartSetup for "cert-options-742979" (driver="kvm2")
	I0625 16:51:44.682891   67510 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0625 16:51:44.682903   67510 main.go:141] libmachine: (cert-options-742979) Calling .DriverName
	I0625 16:51:44.683158   67510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0625 16:51:44.683180   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHHostname
	I0625 16:51:44.685309   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.685654   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:44.685689   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.685822   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHPort
	I0625 16:51:44.686010   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:44.686190   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHUsername
	I0625 16:51:44.686357   67510 sshutil.go:53] new ssh client: &{IP:192.168.83.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/cert-options-742979/id_rsa Username:docker}
	I0625 16:51:44.770702   67510 ssh_runner.go:195] Run: cat /etc/os-release
	I0625 16:51:44.775141   67510 info.go:137] Remote host: Buildroot 2023.02.9
	I0625 16:51:44.775155   67510 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/addons for local assets ...
	I0625 16:51:44.775223   67510 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/files for local assets ...
	I0625 16:51:44.775338   67510 filesync.go:149] local asset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> 212392.pem in /etc/ssl/certs
	I0625 16:51:44.775456   67510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0625 16:51:44.784879   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem --> /etc/ssl/certs/212392.pem (1708 bytes)
	I0625 16:51:44.808849   67510 start.go:296] duration metric: took 125.956993ms for postStartSetup
	I0625 16:51:44.808893   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetConfigRaw
	I0625 16:51:44.809468   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetIP
	I0625 16:51:44.812126   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.812455   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:44.812480   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.812735   67510 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/config.json ...
	I0625 16:51:44.812939   67510 start.go:128] duration metric: took 24.669466141s to createHost
	I0625 16:51:44.812954   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHHostname
	I0625 16:51:44.815103   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.815398   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:44.815414   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.815490   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHPort
	I0625 16:51:44.815667   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:44.815842   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:44.815987   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHUsername
	I0625 16:51:44.816166   67510 main.go:141] libmachine: Using SSH client type: native
	I0625 16:51:44.816313   67510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.83.28 22 <nil> <nil>}
	I0625 16:51:44.816318   67510 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0625 16:51:44.923959   67510 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719334304.900827745
	
	I0625 16:51:44.923971   67510 fix.go:216] guest clock: 1719334304.900827745
	I0625 16:51:44.923979   67510 fix.go:229] Guest: 2024-06-25 16:51:44.900827745 +0000 UTC Remote: 2024-06-25 16:51:44.812944397 +0000 UTC m=+24.772173068 (delta=87.883348ms)
	I0625 16:51:44.924026   67510 fix.go:200] guest clock delta is within tolerance: 87.883348ms
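The fix step above reads the guest clock with `date`, compares it against the local clock, and accepts the drift when it is within tolerance (an ~88 ms delta here). A minimal sketch of that comparison; the one-second tolerance is an illustrative stand-in, not the value minikube uses:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest/host clock delta is acceptable.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// 1719334304.900827745 is the guest timestamp echoed in the log above.
	guest := time.Unix(1719334304, 900827745)
	host := guest.Add(-88 * time.Millisecond) // stand-in for the local clock reading
	delta, ok := withinTolerance(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
}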
	I0625 16:51:44.924031   67510 start.go:83] releasing machines lock for "cert-options-742979", held for 24.780626337s
	I0625 16:51:44.924054   67510 main.go:141] libmachine: (cert-options-742979) Calling .DriverName
	I0625 16:51:44.924325   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetIP
	I0625 16:51:44.927103   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.927468   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:44.927488   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.927632   67510 main.go:141] libmachine: (cert-options-742979) Calling .DriverName
	I0625 16:51:44.928152   67510 main.go:141] libmachine: (cert-options-742979) Calling .DriverName
	I0625 16:51:44.928324   67510 main.go:141] libmachine: (cert-options-742979) Calling .DriverName
	I0625 16:51:44.928404   67510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0625 16:51:44.928444   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHHostname
	I0625 16:51:44.928541   67510 ssh_runner.go:195] Run: cat /version.json
	I0625 16:51:44.928559   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHHostname
	I0625 16:51:44.931726   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.931963   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.932100   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:44.932132   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.932249   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:44.932270   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.932320   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHPort
	I0625 16:51:44.932477   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:44.932492   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHPort
	I0625 16:51:44.932601   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:44.932642   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHUsername
	I0625 16:51:44.932703   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHUsername
	I0625 16:51:44.932783   67510 sshutil.go:53] new ssh client: &{IP:192.168.83.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/cert-options-742979/id_rsa Username:docker}
	I0625 16:51:44.932869   67510 sshutil.go:53] new ssh client: &{IP:192.168.83.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/cert-options-742979/id_rsa Username:docker}
	I0625 16:51:45.039524   67510 ssh_runner.go:195] Run: systemctl --version
	I0625 16:51:45.046601   67510 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0625 16:51:45.207453   67510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0625 16:51:45.216059   67510 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0625 16:51:45.216113   67510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0625 16:51:45.238074   67510 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0625 16:51:45.238085   67510 start.go:494] detecting cgroup driver to use...
	I0625 16:51:45.238165   67510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0625 16:51:45.257881   67510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0625 16:51:45.274535   67510 docker.go:217] disabling cri-docker service (if available) ...
	I0625 16:51:45.274579   67510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0625 16:51:45.294849   67510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0625 16:51:45.312759   67510 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0625 16:51:45.439612   67510 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0625 16:51:45.605628   67510 docker.go:233] disabling docker service ...
	I0625 16:51:45.605682   67510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0625 16:51:45.623610   67510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0625 16:51:45.637774   67510 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0625 16:51:45.791282   67510 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0625 16:51:45.914503   67510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0625 16:51:45.929677   67510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0625 16:51:45.949988   67510 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0625 16:51:45.950059   67510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:51:45.961071   67510 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0625 16:51:45.961110   67510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:51:45.971851   67510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:51:45.982435   67510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:51:45.993177   67510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0625 16:51:46.004128   67510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:51:46.014665   67510 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:51:46.032763   67510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
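Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf pointing at the registry.k8s.io/pause:3.9 pause image, using cgroupfs as the cgroup manager, running conmon in the pod cgroup, and opening unprivileged ports via default_sysctls. A sketch that renders an equivalent drop-in directly; the section layout is an approximation, since the full file is not shown in this log:

package main

import "fmt"

// renderCrioDropIn prints CRI-O settings equivalent to the sed edits above.
// Only the keys touched by the log are included; the real drop-in has more.
func renderCrioDropIn(pauseImage, cgroupManager string) string {
	return fmt.Sprintf(`[crio.image]
pause_image = %q

[crio.runtime]
cgroup_manager = %q
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`, pauseImage, cgroupManager)
}

func main() {
	fmt.Print(renderCrioDropIn("registry.k8s.io/pause:3.9", "cgroupfs"))
}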
	I0625 16:51:46.045076   67510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0625 16:51:46.056565   67510 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0625 16:51:46.056609   67510 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0625 16:51:46.072971   67510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
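The sysctl failure above just means br_netfilter is not loaded yet, so the code falls back to modprobe and then enables IPv4 forwarding. A condensed sketch of that fallback (must run as root; it writes /proc directly rather than shelling out):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the bridge netfilter sysctl is missing, the module is not loaded yet.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter failed: %v: %s\n", err, out)
			return
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Println("enable ip_forward:", err)
	}
}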
	I0625 16:51:46.083694   67510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 16:51:46.213002   67510 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0625 16:51:46.363181   67510 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0625 16:51:46.363247   67510 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0625 16:51:46.368170   67510 start.go:562] Will wait 60s for crictl version
	I0625 16:51:46.368214   67510 ssh_runner.go:195] Run: which crictl
	I0625 16:51:46.372091   67510 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0625 16:51:46.416465   67510 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0625 16:51:46.416626   67510 ssh_runner.go:195] Run: crio --version
	I0625 16:51:46.448357   67510 ssh_runner.go:195] Run: crio --version
	I0625 16:51:46.483901   67510 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0625 16:51:44.200835   66820 pod_ready.go:102] pod "etcd-pause-756277" in "kube-system" namespace has status "Ready":"False"
	I0625 16:51:46.201285   66820 pod_ready.go:102] pod "etcd-pause-756277" in "kube-system" namespace has status "Ready":"False"
	I0625 16:51:44.926581   67969 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0625 16:51:44.926749   67969 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19128-13846/.minikube/bin/docker-machine-driver-kvm2
	I0625 16:51:44.926793   67969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:51:44.943707   67969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46287
	I0625 16:51:44.944112   67969 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:51:44.944849   67969 main.go:141] libmachine: Using API Version  1
	I0625 16:51:44.944872   67969 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:51:44.945231   67969 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:51:44.945429   67969 main.go:141] libmachine: (old-k8s-version-462347) Calling .GetMachineName
	I0625 16:51:44.945565   67969 main.go:141] libmachine: (old-k8s-version-462347) Calling .DriverName
	I0625 16:51:44.945713   67969 start.go:159] libmachine.API.Create for "old-k8s-version-462347" (driver="kvm2")
	I0625 16:51:44.945739   67969 client.go:168] LocalClient.Create starting
	I0625 16:51:44.945776   67969 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem
	I0625 16:51:44.945813   67969 main.go:141] libmachine: Decoding PEM data...
	I0625 16:51:44.945832   67969 main.go:141] libmachine: Parsing certificate...
	I0625 16:51:44.945906   67969 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem
	I0625 16:51:44.945931   67969 main.go:141] libmachine: Decoding PEM data...
	I0625 16:51:44.945952   67969 main.go:141] libmachine: Parsing certificate...
	I0625 16:51:44.945976   67969 main.go:141] libmachine: Running pre-create checks...
	I0625 16:51:44.945995   67969 main.go:141] libmachine: (old-k8s-version-462347) Calling .PreCreateCheck
	I0625 16:51:44.946433   67969 main.go:141] libmachine: (old-k8s-version-462347) Calling .GetConfigRaw
	I0625 16:51:44.946936   67969 main.go:141] libmachine: Creating machine...
	I0625 16:51:44.946955   67969 main.go:141] libmachine: (old-k8s-version-462347) Calling .Create
	I0625 16:51:44.947101   67969 main.go:141] libmachine: (old-k8s-version-462347) Creating KVM machine...
	I0625 16:51:44.948379   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | found existing default KVM network
	I0625 16:51:44.949923   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:44.949766   68009 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026e0d0}
	I0625 16:51:44.949948   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | created network xml: 
	I0625 16:51:44.949960   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | <network>
	I0625 16:51:44.949971   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG |   <name>mk-old-k8s-version-462347</name>
	I0625 16:51:44.949979   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG |   <dns enable='no'/>
	I0625 16:51:44.949991   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG |   
	I0625 16:51:44.950014   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0625 16:51:44.950029   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG |     <dhcp>
	I0625 16:51:44.950040   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0625 16:51:44.950052   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG |     </dhcp>
	I0625 16:51:44.950061   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG |   </ip>
	I0625 16:51:44.950069   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG |   
	I0625 16:51:44.950081   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | </network>
	I0625 16:51:44.950091   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | 
	I0625 16:51:44.955636   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | trying to create private KVM network mk-old-k8s-version-462347 192.168.39.0/24...
	I0625 16:51:45.030929   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | private KVM network mk-old-k8s-version-462347 192.168.39.0/24 created
	I0625 16:51:45.030963   67969 main.go:141] libmachine: (old-k8s-version-462347) Setting up store path in /home/jenkins/minikube-integration/19128-13846/.minikube/machines/old-k8s-version-462347 ...
	I0625 16:51:45.030979   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:45.030907   68009 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 16:51:45.031001   67969 main.go:141] libmachine: (old-k8s-version-462347) Building disk image from file:///home/jenkins/minikube-integration/19128-13846/.minikube/cache/iso/amd64/minikube-v1.33.1-1719245461-19128-amd64.iso
	I0625 16:51:45.031077   67969 main.go:141] libmachine: (old-k8s-version-462347) Downloading /home/jenkins/minikube-integration/19128-13846/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19128-13846/.minikube/cache/iso/amd64/minikube-v1.33.1-1719245461-19128-amd64.iso...
	I0625 16:51:45.295483   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:45.295366   68009 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/old-k8s-version-462347/id_rsa...
	I0625 16:51:45.606488   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:45.606365   68009 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/old-k8s-version-462347/old-k8s-version-462347.rawdisk...
	I0625 16:51:45.606517   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | Writing magic tar header
	I0625 16:51:45.606530   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | Writing SSH key tar header
	I0625 16:51:45.606620   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:45.606558   68009 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19128-13846/.minikube/machines/old-k8s-version-462347 ...
	I0625 16:51:45.606709   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/old-k8s-version-462347
	I0625 16:51:45.606736   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube/machines
	I0625 16:51:45.606755   67969 main.go:141] libmachine: (old-k8s-version-462347) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube/machines/old-k8s-version-462347 (perms=drwx------)
	I0625 16:51:45.606771   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 16:51:45.606782   67969 main.go:141] libmachine: (old-k8s-version-462347) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube/machines (perms=drwxr-xr-x)
	I0625 16:51:45.606795   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846
	I0625 16:51:45.606807   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0625 16:51:45.606821   67969 main.go:141] libmachine: (old-k8s-version-462347) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube (perms=drwxr-xr-x)
	I0625 16:51:45.606836   67969 main.go:141] libmachine: (old-k8s-version-462347) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846 (perms=drwxrwxr-x)
	I0625 16:51:45.606849   67969 main.go:141] libmachine: (old-k8s-version-462347) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0625 16:51:45.606864   67969 main.go:141] libmachine: (old-k8s-version-462347) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0625 16:51:45.606872   67969 main.go:141] libmachine: (old-k8s-version-462347) Creating domain...
	I0625 16:51:45.606884   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | Checking permissions on dir: /home/jenkins
	I0625 16:51:45.606916   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | Checking permissions on dir: /home
	I0625 16:51:45.606928   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | Skipping /home - not owner
	I0625 16:51:45.608102   67969 main.go:141] libmachine: (old-k8s-version-462347) define libvirt domain using xml: 
	I0625 16:51:45.608123   67969 main.go:141] libmachine: (old-k8s-version-462347) <domain type='kvm'>
	I0625 16:51:45.608131   67969 main.go:141] libmachine: (old-k8s-version-462347)   <name>old-k8s-version-462347</name>
	I0625 16:51:45.608140   67969 main.go:141] libmachine: (old-k8s-version-462347)   <memory unit='MiB'>2200</memory>
	I0625 16:51:45.608149   67969 main.go:141] libmachine: (old-k8s-version-462347)   <vcpu>2</vcpu>
	I0625 16:51:45.608156   67969 main.go:141] libmachine: (old-k8s-version-462347)   <features>
	I0625 16:51:45.608169   67969 main.go:141] libmachine: (old-k8s-version-462347)     <acpi/>
	I0625 16:51:45.608176   67969 main.go:141] libmachine: (old-k8s-version-462347)     <apic/>
	I0625 16:51:45.608190   67969 main.go:141] libmachine: (old-k8s-version-462347)     <pae/>
	I0625 16:51:45.608197   67969 main.go:141] libmachine: (old-k8s-version-462347)     
	I0625 16:51:45.608207   67969 main.go:141] libmachine: (old-k8s-version-462347)   </features>
	I0625 16:51:45.608214   67969 main.go:141] libmachine: (old-k8s-version-462347)   <cpu mode='host-passthrough'>
	I0625 16:51:45.608219   67969 main.go:141] libmachine: (old-k8s-version-462347)   
	I0625 16:51:45.608226   67969 main.go:141] libmachine: (old-k8s-version-462347)   </cpu>
	I0625 16:51:45.608255   67969 main.go:141] libmachine: (old-k8s-version-462347)   <os>
	I0625 16:51:45.608277   67969 main.go:141] libmachine: (old-k8s-version-462347)     <type>hvm</type>
	I0625 16:51:45.608288   67969 main.go:141] libmachine: (old-k8s-version-462347)     <boot dev='cdrom'/>
	I0625 16:51:45.608297   67969 main.go:141] libmachine: (old-k8s-version-462347)     <boot dev='hd'/>
	I0625 16:51:45.608311   67969 main.go:141] libmachine: (old-k8s-version-462347)     <bootmenu enable='no'/>
	I0625 16:51:45.608322   67969 main.go:141] libmachine: (old-k8s-version-462347)   </os>
	I0625 16:51:45.608333   67969 main.go:141] libmachine: (old-k8s-version-462347)   <devices>
	I0625 16:51:45.608345   67969 main.go:141] libmachine: (old-k8s-version-462347)     <disk type='file' device='cdrom'>
	I0625 16:51:45.608361   67969 main.go:141] libmachine: (old-k8s-version-462347)       <source file='/home/jenkins/minikube-integration/19128-13846/.minikube/machines/old-k8s-version-462347/boot2docker.iso'/>
	I0625 16:51:45.608375   67969 main.go:141] libmachine: (old-k8s-version-462347)       <target dev='hdc' bus='scsi'/>
	I0625 16:51:45.608387   67969 main.go:141] libmachine: (old-k8s-version-462347)       <readonly/>
	I0625 16:51:45.608397   67969 main.go:141] libmachine: (old-k8s-version-462347)     </disk>
	I0625 16:51:45.608408   67969 main.go:141] libmachine: (old-k8s-version-462347)     <disk type='file' device='disk'>
	I0625 16:51:45.608425   67969 main.go:141] libmachine: (old-k8s-version-462347)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0625 16:51:45.608446   67969 main.go:141] libmachine: (old-k8s-version-462347)       <source file='/home/jenkins/minikube-integration/19128-13846/.minikube/machines/old-k8s-version-462347/old-k8s-version-462347.rawdisk'/>
	I0625 16:51:45.608459   67969 main.go:141] libmachine: (old-k8s-version-462347)       <target dev='hda' bus='virtio'/>
	I0625 16:51:45.608471   67969 main.go:141] libmachine: (old-k8s-version-462347)     </disk>
	I0625 16:51:45.608484   67969 main.go:141] libmachine: (old-k8s-version-462347)     <interface type='network'>
	I0625 16:51:45.608497   67969 main.go:141] libmachine: (old-k8s-version-462347)       <source network='mk-old-k8s-version-462347'/>
	I0625 16:51:45.608525   67969 main.go:141] libmachine: (old-k8s-version-462347)       <model type='virtio'/>
	I0625 16:51:45.608562   67969 main.go:141] libmachine: (old-k8s-version-462347)     </interface>
	I0625 16:51:45.608576   67969 main.go:141] libmachine: (old-k8s-version-462347)     <interface type='network'>
	I0625 16:51:45.608587   67969 main.go:141] libmachine: (old-k8s-version-462347)       <source network='default'/>
	I0625 16:51:45.608597   67969 main.go:141] libmachine: (old-k8s-version-462347)       <model type='virtio'/>
	I0625 16:51:45.608605   67969 main.go:141] libmachine: (old-k8s-version-462347)     </interface>
	I0625 16:51:45.608610   67969 main.go:141] libmachine: (old-k8s-version-462347)     <serial type='pty'>
	I0625 16:51:45.608622   67969 main.go:141] libmachine: (old-k8s-version-462347)       <target port='0'/>
	I0625 16:51:45.608633   67969 main.go:141] libmachine: (old-k8s-version-462347)     </serial>
	I0625 16:51:45.608641   67969 main.go:141] libmachine: (old-k8s-version-462347)     <console type='pty'>
	I0625 16:51:45.608654   67969 main.go:141] libmachine: (old-k8s-version-462347)       <target type='serial' port='0'/>
	I0625 16:51:45.608665   67969 main.go:141] libmachine: (old-k8s-version-462347)     </console>
	I0625 16:51:45.608677   67969 main.go:141] libmachine: (old-k8s-version-462347)     <rng model='virtio'>
	I0625 16:51:45.608689   67969 main.go:141] libmachine: (old-k8s-version-462347)       <backend model='random'>/dev/random</backend>
	I0625 16:51:45.608707   67969 main.go:141] libmachine: (old-k8s-version-462347)     </rng>
	I0625 16:51:45.608715   67969 main.go:141] libmachine: (old-k8s-version-462347)     
	I0625 16:51:45.608727   67969 main.go:141] libmachine: (old-k8s-version-462347)     
	I0625 16:51:45.608739   67969 main.go:141] libmachine: (old-k8s-version-462347)   </devices>
	I0625 16:51:45.608754   67969 main.go:141] libmachine: (old-k8s-version-462347) </domain>
	I0625 16:51:45.608767   67969 main.go:141] libmachine: (old-k8s-version-462347) 
	I0625 16:51:45.612847   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | domain old-k8s-version-462347 has defined MAC address 52:54:00:01:69:60 in network default
	I0625 16:51:45.613504   67969 main.go:141] libmachine: (old-k8s-version-462347) Ensuring networks are active...
	I0625 16:51:45.613527   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | domain old-k8s-version-462347 has defined MAC address 52:54:00:06:95:fd in network mk-old-k8s-version-462347
	I0625 16:51:45.614278   67969 main.go:141] libmachine: (old-k8s-version-462347) Ensuring network default is active
	I0625 16:51:45.614865   67969 main.go:141] libmachine: (old-k8s-version-462347) Ensuring network mk-old-k8s-version-462347 is active
	I0625 16:51:45.615553   67969 main.go:141] libmachine: (old-k8s-version-462347) Getting domain xml...
	I0625 16:51:45.616178   67969 main.go:141] libmachine: (old-k8s-version-462347) Creating domain...
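The libmachine steps above define a private network and a domain from generated XML and then bring them up through the libvirt API. A rough virsh-based sketch of the same flow, assuming the XML printed above has been saved to net.xml and domain.xml and that virsh can reach qemu:///system:

package main

import (
	"log"
	"os/exec"
)

// run wraps a single virsh invocation and aborts on failure.
func run(args ...string) {
	if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
		log.Fatalf("virsh %v: %v\n%s", args, err, out)
	}
	log.Printf("virsh %v ok", args)
}

func main() {
	run("net-define", "net.xml") // private network XML from the log
	run("net-start", "mk-old-k8s-version-462347")
	run("define", "domain.xml") // domain XML from the log
	run("start", "old-k8s-version-462347")
}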
	I0625 16:51:46.957433   67969 main.go:141] libmachine: (old-k8s-version-462347) Waiting to get IP...
	I0625 16:51:46.958505   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | domain old-k8s-version-462347 has defined MAC address 52:54:00:06:95:fd in network mk-old-k8s-version-462347
	I0625 16:51:46.959067   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | unable to find current IP address of domain old-k8s-version-462347 in network mk-old-k8s-version-462347
	I0625 16:51:46.959104   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:46.959055   68009 retry.go:31] will retry after 223.641081ms: waiting for machine to come up
	I0625 16:51:47.184757   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | domain old-k8s-version-462347 has defined MAC address 52:54:00:06:95:fd in network mk-old-k8s-version-462347
	I0625 16:51:47.185427   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | unable to find current IP address of domain old-k8s-version-462347 in network mk-old-k8s-version-462347
	I0625 16:51:47.185454   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:47.185349   68009 retry.go:31] will retry after 246.556335ms: waiting for machine to come up
	I0625 16:51:47.433988   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | domain old-k8s-version-462347 has defined MAC address 52:54:00:06:95:fd in network mk-old-k8s-version-462347
	I0625 16:51:47.434905   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | unable to find current IP address of domain old-k8s-version-462347 in network mk-old-k8s-version-462347
	I0625 16:51:47.434935   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:47.434851   68009 retry.go:31] will retry after 303.860912ms: waiting for machine to come up
	I0625 16:51:47.740500   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | domain old-k8s-version-462347 has defined MAC address 52:54:00:06:95:fd in network mk-old-k8s-version-462347
	I0625 16:51:47.741087   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | unable to find current IP address of domain old-k8s-version-462347 in network mk-old-k8s-version-462347
	I0625 16:51:47.741112   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:47.741041   68009 retry.go:31] will retry after 411.392596ms: waiting for machine to come up
	I0625 16:51:48.153766   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | domain old-k8s-version-462347 has defined MAC address 52:54:00:06:95:fd in network mk-old-k8s-version-462347
	I0625 16:51:48.154313   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | unable to find current IP address of domain old-k8s-version-462347 in network mk-old-k8s-version-462347
	I0625 16:51:48.154336   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:48.154223   68009 retry.go:31] will retry after 691.010311ms: waiting for machine to come up
	I0625 16:51:46.485144   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetIP
	I0625 16:51:46.488663   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:46.489126   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:46.489143   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:46.489301   67510 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0625 16:51:46.493495   67510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0625 16:51:46.506558   67510 kubeadm.go:877] updating cluster {Name:cert-options-742979 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.30.2 ClusterName:cert-options-742979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.28 Port:8555 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0625 16:51:46.506669   67510 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0625 16:51:46.506721   67510 ssh_runner.go:195] Run: sudo crictl images --output json
	I0625 16:51:46.540006   67510 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0625 16:51:46.540055   67510 ssh_runner.go:195] Run: which lz4
	I0625 16:51:46.544239   67510 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0625 16:51:46.548466   67510 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0625 16:51:46.548482   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0625 16:51:47.993202   67510 crio.go:462] duration metric: took 1.448983606s to copy over tarball
	I0625 16:51:47.993286   67510 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0625 16:51:50.283910   67510 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.290597092s)
	I0625 16:51:50.283927   67510 crio.go:469] duration metric: took 2.290707531s to extract the tarball
	I0625 16:51:50.283934   67510 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0625 16:51:50.324623   67510 ssh_runner.go:195] Run: sudo crictl images --output json
	I0625 16:51:50.373947   67510 crio.go:514] all images are preloaded for cri-o runtime.
	I0625 16:51:50.373960   67510 cache_images.go:84] Images are preloaded, skipping loading
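Above, the preloaded image tarball is copied onto the node and unpacked into /var, after which crictl reports every image as present. A standalone sketch of the extract-and-clean-up step, reusing the tar invocation recorded in the log (needs root plus tar and lz4 on the host):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		log.Fatalf("no preload tarball: %v", err)
	}
	// Same flags as the log: preserve security.capability xattrs, decompress with lz4.
	out, err := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput()
	if err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	if err := os.Remove(tarball); err != nil { // mirrors the rm step in the log
		log.Fatalf("cleanup failed: %v", err)
	}
	log.Print("preloaded images extracted into /var")
}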
	I0625 16:51:50.373968   67510 kubeadm.go:928] updating node { 192.168.83.28 8555 v1.30.2 crio true true} ...
	I0625 16:51:50.374107   67510 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-options-742979 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:cert-options-742979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0625 16:51:50.374190   67510 ssh_runner.go:195] Run: crio config
	I0625 16:51:50.423983   67510 cni.go:84] Creating CNI manager for ""
	I0625 16:51:50.423991   67510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0625 16:51:50.423998   67510 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0625 16:51:50.424015   67510 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.28 APIServerPort:8555 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-options-742979 NodeName:cert-options-742979 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0625 16:51:50.424147   67510 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.28
	  bindPort: 8555
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-options-742979"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.28
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.28"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8555
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0625 16:51:50.424200   67510 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0625 16:51:50.434569   67510 binaries.go:44] Found k8s binaries, skipping transfer
	I0625 16:51:50.434642   67510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0625 16:51:50.445427   67510 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0625 16:51:50.466026   67510 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0625 16:51:50.486011   67510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
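With the kubelet unit, its drop-in, and kubeadm.yaml.new now staged on the node, the generated kubeadm config can be sanity-checked before the real init. A small sketch that dry-runs it, assuming kubeadm v1.30.x is on PATH and the file path matches the one scp'd above:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// --dry-run makes kubeadm validate the config and print the manifests it
	// would create without modifying the node.
	cmd := exec.Command("kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml.new", "--dry-run")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("kubeadm dry-run failed: %v\n%s", err, out)
	}
	log.Printf("config accepted:\n%s", out)
}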
	I0625 16:51:50.503951   67510 ssh_runner.go:195] Run: grep 192.168.83.28	control-plane.minikube.internal$ /etc/hosts
	I0625 16:51:50.507808   67510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0625 16:51:50.519289   67510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 16:51:50.652497   67510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0625 16:51:50.669729   67510 certs.go:68] Setting up /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979 for IP: 192.168.83.28
	I0625 16:51:50.669740   67510 certs.go:194] generating shared ca certs ...
	I0625 16:51:50.669756   67510 certs.go:226] acquiring lock for ca certs: {Name:mkac904b769881cd26c50f043dc80ff92937f71f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:51:50.669933   67510 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key
	I0625 16:51:50.669978   67510 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key
	I0625 16:51:50.669986   67510 certs.go:256] generating profile certs ...
	I0625 16:51:50.670068   67510 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/client.key
	I0625 16:51:50.670080   67510 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/client.crt with IP's: []
	I0625 16:51:50.869978   67510 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/client.crt ...
	I0625 16:51:50.870000   67510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/client.crt: {Name:mkff20436270ca2b0a91285af2158411579c5fff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:51:50.870224   67510 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/client.key ...
	I0625 16:51:50.870236   67510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/client.key: {Name:mk4382263218940c501d9977a4739d1f6618207e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:51:50.870340   67510 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/apiserver.key.4a02eec9
	I0625 16:51:50.870356   67510 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/apiserver.crt.4a02eec9 with IP's: [127.0.0.1 192.168.15.15 10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.28]
	I0625 16:51:50.987125   67510 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/apiserver.crt.4a02eec9 ...
	I0625 16:51:50.987140   67510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/apiserver.crt.4a02eec9: {Name:mk70bc77e5b289b08c519ae4adff17614dc76fb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:51:50.987289   67510 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/apiserver.key.4a02eec9 ...
	I0625 16:51:50.987297   67510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/apiserver.key.4a02eec9: {Name:mkb5316758f637fc71996b7be8a9ac21e7bdd7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:51:50.987363   67510 certs.go:381] copying /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/apiserver.crt.4a02eec9 -> /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/apiserver.crt
	I0625 16:51:50.987443   67510 certs.go:385] copying /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/apiserver.key.4a02eec9 -> /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/apiserver.key
	I0625 16:51:50.987492   67510 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/proxy-client.key
	I0625 16:51:50.987502   67510 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/proxy-client.crt with IP's: []
	I0625 16:51:51.253108   67510 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/proxy-client.crt ...
	I0625 16:51:51.253122   67510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/proxy-client.crt: {Name:mk19b8724de0676ee7a75414d1c2a59129996edc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:51:51.253306   67510 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/proxy-client.key ...
	I0625 16:51:51.253314   67510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/proxy-client.key: {Name:mkeaaf451e75efcdede17d3530fbcd76f62f634c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
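The profile certificates above (client, apiserver, aggregator proxy-client) are generated in-process by minikube's crypto helpers. A self-contained sketch of the apiserver-style case with Go's crypto/x509, using the IP SANs from the log and the extra DNS names from this profile's APIServerNames; the output file names here are illustrative, not minikube's layout.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// writePEM stores a single DER blob as a PEM file.
func writePEM(path, blockType string, der []byte) {
	f, err := os.Create(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if err := pem.Encode(f, &pem.Block{Type: blockType, Bytes: der}); err != nil {
		log.Fatal(err)
	}
}

func main() {
	// Self-signed CA, standing in for the cached minikubeCA key pair.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Serving certificate with the SANs recorded for cert-options-742979.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "cert-options-742979"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "www.google.com"},
		IPAddresses: []net.IP{
			net.ParseIP("127.0.0.1"), net.ParseIP("192.168.15.15"),
			net.ParseIP("10.96.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.83.28"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}

	writePEM("ca.crt", "CERTIFICATE", caDER)
	writePEM("apiserver.crt", "CERTIFICATE", srvDER)
	writePEM("apiserver.key", "RSA PRIVATE KEY", x509.MarshalPKCS1PrivateKey(srvKey))
}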
	I0625 16:51:51.253481   67510 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem (1338 bytes)
	W0625 16:51:51.253510   67510 certs.go:480] ignoring /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239_empty.pem, impossibly tiny 0 bytes
	I0625 16:51:51.253521   67510 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem (1679 bytes)
	I0625 16:51:51.253540   67510 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem (1078 bytes)
	I0625 16:51:51.253558   67510 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem (1123 bytes)
	I0625 16:51:51.253575   67510 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem (1679 bytes)
	I0625 16:51:51.253604   67510 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem (1708 bytes)
	I0625 16:51:51.254205   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0625 16:51:51.286296   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0625 16:51:51.319132   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0625 16:51:51.346415   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0625 16:51:51.371116   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1480 bytes)
	I0625 16:51:51.397075   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0625 16:51:51.422408   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0625 16:51:51.446626   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0625 16:51:51.473112   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0625 16:51:51.504432   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem --> /usr/share/ca-certificates/21239.pem (1338 bytes)
	I0625 16:51:51.546504   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem --> /usr/share/ca-certificates/212392.pem (1708 bytes)
	I0625 16:51:51.577124   67510 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0625 16:51:51.595227   67510 ssh_runner.go:195] Run: openssl version
	I0625 16:51:51.601077   67510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0625 16:51:51.612617   67510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:51:51.617496   67510 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 25 15:10 /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:51:51.617553   67510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:51:51.624030   67510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0625 16:51:51.635890   67510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21239.pem && ln -fs /usr/share/ca-certificates/21239.pem /etc/ssl/certs/21239.pem"
	I0625 16:51:51.646824   67510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21239.pem
	I0625 16:51:51.651368   67510 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 25 15:51 /usr/share/ca-certificates/21239.pem
	I0625 16:51:51.651403   67510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21239.pem
	I0625 16:51:51.656861   67510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21239.pem /etc/ssl/certs/51391683.0"
	I0625 16:51:51.667523   67510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/212392.pem && ln -fs /usr/share/ca-certificates/212392.pem /etc/ssl/certs/212392.pem"
	I0625 16:51:51.678684   67510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/212392.pem
	I0625 16:51:51.683363   67510 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 25 15:51 /usr/share/ca-certificates/212392.pem
	I0625 16:51:51.683403   67510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/212392.pem
	I0625 16:51:51.689255   67510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/212392.pem /etc/ssl/certs/3ec20f2e.0"
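The openssl and ln calls above publish each CA under /usr/share/ca-certificates and link it into /etc/ssl/certs under its OpenSSL subject hash so TLS clients on the node trust it. A simplified sketch of that hash-and-symlink step for one certificate (run as root; it links the source file directly rather than reproducing minikube's intermediate /etc/ssl/certs copy):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	// openssl x509 -hash prints the subject hash used for the <hash>.0 link name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatalf("hashing %s: %v", cert, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", fmt.Sprintf("%s.0", hash))
	_ = os.Remove(link) // replace any stale link, as ln -fs does
	if err := os.Symlink(cert, link); err != nil {
		log.Fatalf("linking %s -> %s: %v", link, cert, err)
	}
	log.Printf("linked %s -> %s", link, cert)
}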
	I0625 16:51:51.701410   67510 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0625 16:51:51.705561   67510 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0625 16:51:51.705604   67510 kubeadm.go:391] StartCluster: {Name:cert-options-742979 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.2 ClusterName:cert-options-742979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.28 Port:8555 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:doc
ker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 16:51:51.705682   67510 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0625 16:51:51.705742   67510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0625 16:51:51.757540   67510 cri.go:89] found id: ""
	I0625 16:51:51.757611   67510 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0625 16:51:51.769400   67510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0625 16:51:51.780397   67510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0625 16:51:51.790663   67510 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0625 16:51:51.790672   67510 kubeadm.go:156] found existing configuration files:
	
	I0625 16:51:51.790715   67510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/admin.conf
	I0625 16:51:51.800156   67510 kubeadm.go:162] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0625 16:51:51.800203   67510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0625 16:51:51.809987   67510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/kubelet.conf
	I0625 16:51:51.821114   67510 kubeadm.go:162] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0625 16:51:51.821163   67510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0625 16:51:51.832611   67510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/controller-manager.conf
	I0625 16:51:51.843420   67510 kubeadm.go:162] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0625 16:51:51.843461   67510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0625 16:51:51.855025   67510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/scheduler.conf
	I0625 16:51:51.866223   67510 kubeadm.go:162] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0625 16:51:51.866256   67510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0625 16:51:51.876893   67510 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0625 16:51:52.012810   67510 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0625 16:51:52.013123   67510 kubeadm.go:309] [preflight] Running pre-flight checks
	I0625 16:51:52.146822   67510 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0625 16:51:52.146962   67510 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0625 16:51:52.147083   67510 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0625 16:51:52.403860   67510 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0625 16:51:48.202332   66820 pod_ready.go:102] pod "etcd-pause-756277" in "kube-system" namespace has status "Ready":"False"
	I0625 16:51:50.702553   66820 pod_ready.go:102] pod "etcd-pause-756277" in "kube-system" namespace has status "Ready":"False"
	I0625 16:51:51.201552   66820 pod_ready.go:92] pod "etcd-pause-756277" in "kube-system" namespace has status "Ready":"True"
	I0625 16:51:51.201575   66820 pod_ready.go:81] duration metric: took 9.006993828s for pod "etcd-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:51.201585   66820 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:51.207908   66820 pod_ready.go:92] pod "kube-apiserver-pause-756277" in "kube-system" namespace has status "Ready":"True"
	I0625 16:51:51.207929   66820 pod_ready.go:81] duration metric: took 6.337727ms for pod "kube-apiserver-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:51.207942   66820 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:51.715386   66820 pod_ready.go:92] pod "kube-controller-manager-pause-756277" in "kube-system" namespace has status "Ready":"True"
	I0625 16:51:51.715411   66820 pod_ready.go:81] duration metric: took 507.461084ms for pod "kube-controller-manager-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:51.715428   66820 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-k2flf" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:51.722419   66820 pod_ready.go:92] pod "kube-proxy-k2flf" in "kube-system" namespace has status "Ready":"True"
	I0625 16:51:51.722439   66820 pod_ready.go:81] duration metric: took 7.003281ms for pod "kube-proxy-k2flf" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:51.722451   66820 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:48.848078   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | domain old-k8s-version-462347 has defined MAC address 52:54:00:06:95:fd in network mk-old-k8s-version-462347
	I0625 16:51:48.848727   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | unable to find current IP address of domain old-k8s-version-462347 in network mk-old-k8s-version-462347
	I0625 16:51:48.848753   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:48.848679   68009 retry.go:31] will retry after 615.70938ms: waiting for machine to come up
	I0625 16:51:49.466514   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | domain old-k8s-version-462347 has defined MAC address 52:54:00:06:95:fd in network mk-old-k8s-version-462347
	I0625 16:51:49.467030   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | unable to find current IP address of domain old-k8s-version-462347 in network mk-old-k8s-version-462347
	I0625 16:51:49.467082   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:49.466976   68009 retry.go:31] will retry after 1.098402085s: waiting for machine to come up
	I0625 16:51:50.566833   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | domain old-k8s-version-462347 has defined MAC address 52:54:00:06:95:fd in network mk-old-k8s-version-462347
	I0625 16:51:50.567412   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | unable to find current IP address of domain old-k8s-version-462347 in network mk-old-k8s-version-462347
	I0625 16:51:50.567439   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:50.567364   68009 retry.go:31] will retry after 1.338001197s: waiting for machine to come up
	I0625 16:51:51.906989   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | domain old-k8s-version-462347 has defined MAC address 52:54:00:06:95:fd in network mk-old-k8s-version-462347
	I0625 16:51:51.907694   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | unable to find current IP address of domain old-k8s-version-462347 in network mk-old-k8s-version-462347
	I0625 16:51:51.907722   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:51.907642   68009 retry.go:31] will retry after 1.695207109s: waiting for machine to come up
	I0625 16:51:52.532225   67510 out.go:204]   - Generating certificates and keys ...
	I0625 16:51:52.532378   67510 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0625 16:51:52.532501   67510 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0625 16:51:52.613355   67510 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0625 16:51:52.680508   67510 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0625 16:51:52.983167   67510 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0625 16:51:53.163535   67510 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0625 16:51:53.391112   67510 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0625 16:51:53.391258   67510 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [cert-options-742979 localhost] and IPs [192.168.83.28 127.0.0.1 ::1]
	I0625 16:51:53.563704   67510 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0625 16:51:53.563842   67510 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [cert-options-742979 localhost] and IPs [192.168.83.28 127.0.0.1 ::1]
	I0625 16:51:53.780439   67510 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0625 16:51:53.884239   67510 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0625 16:51:53.992173   67510 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0625 16:51:53.992278   67510 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0625 16:51:54.285955   67510 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0625 16:51:54.698504   67510 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0625 16:51:54.824516   67510 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0625 16:51:55.130931   67510 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0625 16:51:55.273440   67510 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0625 16:51:55.274408   67510 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0625 16:51:55.278236   67510 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0625 16:51:53.827154   66820 pod_ready.go:102] pod "kube-scheduler-pause-756277" in "kube-system" namespace has status "Ready":"False"
	I0625 16:51:55.729273   66820 pod_ready.go:92] pod "kube-scheduler-pause-756277" in "kube-system" namespace has status "Ready":"True"
	I0625 16:51:55.729299   66820 pod_ready.go:81] duration metric: took 4.006838878s for pod "kube-scheduler-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:55.729309   66820 pod_ready.go:38] duration metric: took 13.546558495s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0625 16:51:55.729328   66820 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0625 16:51:55.744967   66820 ops.go:34] apiserver oom_adj: -16
	I0625 16:51:55.744987   66820 kubeadm.go:591] duration metric: took 40.644428463s to restartPrimaryControlPlane
	I0625 16:51:55.744998   66820 kubeadm.go:393] duration metric: took 41.036637358s to StartCluster
	I0625 16:51:55.745020   66820 settings.go:142] acquiring lock: {Name:mk38d7db80b40da56857d65b8e7da05700cdb9d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:51:55.745098   66820 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19128-13846/kubeconfig
	I0625 16:51:55.746434   66820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/kubeconfig: {Name:mk71a37176bd7deadd1f1cd3c756fe56f3b0810d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:51:55.746739   66820 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.163 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0625 16:51:55.746958   66820 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0625 16:51:55.747056   66820 config.go:182] Loaded profile config "pause-756277": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:51:55.748448   66820 out.go:177] * Verifying Kubernetes components...
	I0625 16:51:55.749331   66820 out.go:177] * Enabled addons: 
	I0625 16:51:55.750150   66820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 16:51:55.750890   66820 addons.go:510] duration metric: took 3.934413ms for enable addons: enabled=[]
	I0625 16:51:55.925860   66820 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0625 16:51:55.945014   66820 node_ready.go:35] waiting up to 6m0s for node "pause-756277" to be "Ready" ...
	I0625 16:51:55.948792   66820 node_ready.go:49] node "pause-756277" has status "Ready":"True"
	I0625 16:51:55.948819   66820 node_ready.go:38] duration metric: took 3.766818ms for node "pause-756277" to be "Ready" ...
	I0625 16:51:55.948831   66820 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0625 16:51:55.959007   66820 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jsf7r" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:55.965535   66820 pod_ready.go:92] pod "coredns-7db6d8ff4d-jsf7r" in "kube-system" namespace has status "Ready":"True"
	I0625 16:51:55.965558   66820 pod_ready.go:81] duration metric: took 6.519765ms for pod "coredns-7db6d8ff4d-jsf7r" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:55.965569   66820 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:55.999867   66820 pod_ready.go:92] pod "etcd-pause-756277" in "kube-system" namespace has status "Ready":"True"
	I0625 16:51:55.999887   66820 pod_ready.go:81] duration metric: took 34.312113ms for pod "etcd-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:55.999897   66820 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:56.399680   66820 pod_ready.go:92] pod "kube-apiserver-pause-756277" in "kube-system" namespace has status "Ready":"True"
	I0625 16:51:56.399712   66820 pod_ready.go:81] duration metric: took 399.807529ms for pod "kube-apiserver-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:56.399726   66820 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:56.801024   66820 pod_ready.go:92] pod "kube-controller-manager-pause-756277" in "kube-system" namespace has status "Ready":"True"
	I0625 16:51:56.801046   66820 pod_ready.go:81] duration metric: took 401.311475ms for pod "kube-controller-manager-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:56.801055   66820 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k2flf" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:57.200381   66820 pod_ready.go:92] pod "kube-proxy-k2flf" in "kube-system" namespace has status "Ready":"True"
	I0625 16:51:57.200409   66820 pod_ready.go:81] duration metric: took 399.346662ms for pod "kube-proxy-k2flf" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:57.200421   66820 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:53.605486   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | domain old-k8s-version-462347 has defined MAC address 52:54:00:06:95:fd in network mk-old-k8s-version-462347
	I0625 16:51:53.605994   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | unable to find current IP address of domain old-k8s-version-462347 in network mk-old-k8s-version-462347
	I0625 16:51:53.606021   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:53.605938   68009 retry.go:31] will retry after 1.870496428s: waiting for machine to come up
	I0625 16:51:55.477847   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | domain old-k8s-version-462347 has defined MAC address 52:54:00:06:95:fd in network mk-old-k8s-version-462347
	I0625 16:51:55.478354   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | unable to find current IP address of domain old-k8s-version-462347 in network mk-old-k8s-version-462347
	I0625 16:51:55.478384   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:55.478308   68009 retry.go:31] will retry after 1.914303586s: waiting for machine to come up
	I0625 16:51:57.394848   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | domain old-k8s-version-462347 has defined MAC address 52:54:00:06:95:fd in network mk-old-k8s-version-462347
	I0625 16:51:57.395374   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | unable to find current IP address of domain old-k8s-version-462347 in network mk-old-k8s-version-462347
	I0625 16:51:57.395405   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:57.395336   68009 retry.go:31] will retry after 2.696563668s: waiting for machine to come up
	I0625 16:51:57.599748   66820 pod_ready.go:92] pod "kube-scheduler-pause-756277" in "kube-system" namespace has status "Ready":"True"
	I0625 16:51:57.599778   66820 pod_ready.go:81] duration metric: took 399.348589ms for pod "kube-scheduler-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:57.599788   66820 pod_ready.go:38] duration metric: took 1.650945213s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0625 16:51:57.599806   66820 api_server.go:52] waiting for apiserver process to appear ...
	I0625 16:51:57.599866   66820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 16:51:57.623304   66820 api_server.go:72] duration metric: took 1.876524409s to wait for apiserver process to appear ...
	I0625 16:51:57.623334   66820 api_server.go:88] waiting for apiserver healthz status ...
	I0625 16:51:57.623363   66820 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I0625 16:51:57.634972   66820 api_server.go:279] https://192.168.50.163:8443/healthz returned 200:
	ok
	I0625 16:51:57.637240   66820 api_server.go:141] control plane version: v1.30.2
	I0625 16:51:57.637265   66820 api_server.go:131] duration metric: took 13.922241ms to wait for apiserver health ...
	I0625 16:51:57.637275   66820 system_pods.go:43] waiting for kube-system pods to appear ...
	I0625 16:51:57.803247   66820 system_pods.go:59] 6 kube-system pods found
	I0625 16:51:57.803281   66820 system_pods.go:61] "coredns-7db6d8ff4d-jsf7r" [8ddacba2-d039-40c7-8731-ba8e5707cfda] Running
	I0625 16:51:57.803288   66820 system_pods.go:61] "etcd-pause-756277" [9b009204-4f10-4d01-9cc3-601cc13fcdbc] Running
	I0625 16:51:57.803294   66820 system_pods.go:61] "kube-apiserver-pause-756277" [384ff579-a83e-4186-9e58-5486ddbfc394] Running
	I0625 16:51:57.803300   66820 system_pods.go:61] "kube-controller-manager-pause-756277" [ac3c7fed-f5ca-4a8b-ae35-8e9c77f41153] Running
	I0625 16:51:57.803306   66820 system_pods.go:61] "kube-proxy-k2flf" [dc85c133-117a-4389-9f53-32d82b3e40ce] Running
	I0625 16:51:57.803312   66820 system_pods.go:61] "kube-scheduler-pause-756277" [39879154-54fd-4458-a274-228563ba7f39] Running
	I0625 16:51:57.803320   66820 system_pods.go:74] duration metric: took 166.02258ms to wait for pod list to return data ...
	I0625 16:51:57.803336   66820 default_sa.go:34] waiting for default service account to be created ...
	I0625 16:51:57.999810   66820 default_sa.go:45] found service account: "default"
	I0625 16:51:57.999837   66820 default_sa.go:55] duration metric: took 196.493717ms for default service account to be created ...
	I0625 16:51:57.999847   66820 system_pods.go:116] waiting for k8s-apps to be running ...
	I0625 16:51:58.201729   66820 system_pods.go:86] 6 kube-system pods found
	I0625 16:51:58.201759   66820 system_pods.go:89] "coredns-7db6d8ff4d-jsf7r" [8ddacba2-d039-40c7-8731-ba8e5707cfda] Running
	I0625 16:51:58.201764   66820 system_pods.go:89] "etcd-pause-756277" [9b009204-4f10-4d01-9cc3-601cc13fcdbc] Running
	I0625 16:51:58.201768   66820 system_pods.go:89] "kube-apiserver-pause-756277" [384ff579-a83e-4186-9e58-5486ddbfc394] Running
	I0625 16:51:58.201774   66820 system_pods.go:89] "kube-controller-manager-pause-756277" [ac3c7fed-f5ca-4a8b-ae35-8e9c77f41153] Running
	I0625 16:51:58.201780   66820 system_pods.go:89] "kube-proxy-k2flf" [dc85c133-117a-4389-9f53-32d82b3e40ce] Running
	I0625 16:51:58.201784   66820 system_pods.go:89] "kube-scheduler-pause-756277" [39879154-54fd-4458-a274-228563ba7f39] Running
	I0625 16:51:58.201790   66820 system_pods.go:126] duration metric: took 201.937614ms to wait for k8s-apps to be running ...
	I0625 16:51:58.201797   66820 system_svc.go:44] waiting for kubelet service to be running ....
	I0625 16:51:58.201838   66820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:51:58.217980   66820 system_svc.go:56] duration metric: took 16.174027ms WaitForService to wait for kubelet
	I0625 16:51:58.218008   66820 kubeadm.go:576] duration metric: took 2.471232731s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0625 16:51:58.218032   66820 node_conditions.go:102] verifying NodePressure condition ...
	I0625 16:51:58.400089   66820 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0625 16:51:58.400111   66820 node_conditions.go:123] node cpu capacity is 2
	I0625 16:51:58.400122   66820 node_conditions.go:105] duration metric: took 182.084463ms to run NodePressure ...
	I0625 16:51:58.400132   66820 start.go:240] waiting for startup goroutines ...
	I0625 16:51:58.400139   66820 start.go:245] waiting for cluster config update ...
	I0625 16:51:58.400146   66820 start.go:254] writing updated cluster config ...
	I0625 16:51:58.400413   66820 ssh_runner.go:195] Run: rm -f paused
	I0625 16:51:58.449800   66820 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0625 16:51:58.451685   66820 out.go:177] * Done! kubectl is now configured to use "pause-756277" cluster and "default" namespace by default
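The verification steps logged above (apiserver healthz probe, kube-system pod listing, kubelet service check) can be repeated by hand. A minimal sketch, assuming the test host can still reach the VM at 192.168.50.163, that the kubeconfig written to /home/jenkins/minikube-integration/19128-13846/kubeconfig is still present, and that /healthz is reachable anonymously (it may require credentials if anonymous auth is disabled):

  # probe the same apiserver health endpoint the log checks (expects "ok")
  curl -sk https://192.168.50.163:8443/healthz
  # list the six kube-system pods the log reports as Running
  KUBECONFIG=/home/jenkins/minikube-integration/19128-13846/kubeconfig kubectl get pods -n kube-system
  # re-run the kubelet service check on the node itself
  out/minikube-linux-amd64 -p pause-756277 ssh -- sudo systemctl is-active kubelet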
	
	
	==> CRI-O <==
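The tab-indented entries below are CRI-O's own debug log, recording the CRI RPCs (Version, ImageFsInfo, ListContainers) issued against the runtime while the cluster is verified; the long ListContainersResponse lines are wrapped by the report. A minimal sketch of issuing the same queries by hand with crictl, assuming crictl is available on the node (for example via `minikube -p pause-756277 ssh`) and CRI-O listens on its default socket:

  # mirrors /runtime.v1.RuntimeService/Version
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
  # mirrors /runtime.v1.ImageService/ImageFsInfo
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
  # mirrors /runtime.v1.RuntimeService/ListContainers (running and exited containers)
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a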
	Jun 25 16:51:59 pause-756277 crio[2478]: time="2024-06-25 16:51:59.097349550Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719334319097324104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7070e94a-1d8f-4943-ae60-0d10becf84bf name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:51:59 pause-756277 crio[2478]: time="2024-06-25 16:51:59.097808729Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a02ac48b-2abe-480f-8f25-3461ad65c217 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:51:59 pause-756277 crio[2478]: time="2024-06-25 16:51:59.097865580Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a02ac48b-2abe-480f-8f25-3461ad65c217 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:51:59 pause-756277 crio[2478]: time="2024-06-25 16:51:59.098675885Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1740598d5cf3e4ded267f89d2b1ce627811652faa5611ed1c54811aac4799b56,PodSandboxId:75ba72120d9946873b4404d468c1f50b920c84806daf2661a3bb6e1066e8bd3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719334297791062960,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7331ec8d20fdfec021ddab1d2b2e4438,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a25c3648d140888f573d7d76f44a4a9b301678446e57e7de9d648a15ef0e6477,PodSandboxId:88987c63b11430baa6c477d4f03c2fb16ce0f8cc82e05e1773300de3b3798151,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719334297789493345,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7640e520f03ea58f37be53c5e026b,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:815bfb4144a979c0d75febf219054e44f58be9cef61ae0e89f475aacac6d1797,PodSandboxId:27bccef8ab2f4cf098b444d320edd34cb819daa5663a5a9f6fb45f718a30ed71,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719334297769722348,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e062d122403f7f365a5b63f47c778e5,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6bf3ba,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2ee0a5a949a621f4b6dd6e3cddbb8da84c18c31f77678f5817f92c691f8c04a,PodSandboxId:c1ab1bd9d99dd5bac46c1f9bc67bddb946d666f2c9f983e74e08cd42b4631c89,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719334297762963682,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c158921e3bdda78ecc0ea9d20447d8,},Annotations:map[string]string{io.kubernetes.container.hash: 33772c37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35c64330d1f127ae90ecb45dc9fabcf2d04ee5c8ae6a8b906780deed57a4be43,PodSandboxId:51a664b19a7e20f1a04d601db6be55a6177b9becfe69785a6d151549a4dd066e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719334275418825022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jsf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddacba2-d039-40c7-8731-ba8e5707cfda,},Annotations:map[string]string{io.kubernetes.container.hash: df16ca51,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"}
,{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0822288d9ccfea7e829bb4c8c1ccbf4837614f5f2882d194b04550215bcf0d5,PodSandboxId:e6eca69bb88511a7765e98f9a0e1f9c8fb738497f34fa78aa51c9076d83fc375,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719334274576543392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2flf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc85c133-117a-4389-9f53-32d82b3e40ce,},Annotations:map[string]string{io.
kubernetes.container.hash: 2ed0814a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d61afd2cc4c8bd0458bb500b4ed4a32ae4210ac11f14960978760413d53aae9,PodSandboxId:c1ab1bd9d99dd5bac46c1f9bc67bddb946d666f2c9f983e74e08cd42b4631c89,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1719334274476386447,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c158921e3bdda78ecc0ea9d20447d8,},Annotations:map[string]string{io.kubernetes.container.hash: 33772c37,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc4673099e5d0c171adb781d9b2890366b76aebd88506fdcd169c982796c793,PodSandboxId:88987c63b11430baa6c477d4f03c2fb16ce0f8cc82e05e1773300de3b3798151,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1719334274489565438,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7640e520f03ea58f37be53c5e026b,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edec890ad5331763619a1058109ef59719931eb1e66170f810b25b86a63bbd3c,PodSandboxId:75ba72120d9946873b4404d468c1f50b920c84806daf2661a3bb6e1066e8bd3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1719334274422406316,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7331ec8d20fdfec021ddab1d2b2e4438,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a43a1e8fd4e02cc25bcade220c757c0d4c7e0c5ef687525fa7058aea35ce1d0e,PodSandboxId:27bccef8ab2f4cf098b444d320edd34cb819daa5663a5a9f6fb45f718a30ed71,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1719334274299312900,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e062d122403f7f365a5b63f47c778e5,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6bf3ba,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4711285f965e8c05454daca7fcdcc495b4cdb478f1da0464bbf229ee779c5f2a,PodSandboxId:f0857f728320f8fae5c8573045909064552e84eb979cb14a6161cde4254448d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719334187028383448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jsf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddacba2-d039-40c7-8731-ba8e5707cfda,},Annotations:map[string]string{io.kubernetes.container.hash: df16ca51,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f6fba0c0a9f02a1736519655e7546883e15c7aad2270f2c098353e2a7a73987,PodSandboxId:c0db4dbb08fbc0e070a59277c742bfda588f8402f9632600d8a72e1ffecabb90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1719334186354943404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2flf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: dc85c133-117a-4389-9f53-32d82b3e40ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2ed0814a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a02ac48b-2abe-480f-8f25-3461ad65c217 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:51:59 pause-756277 crio[2478]: time="2024-06-25 16:51:59.142382904Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8f862513-a361-40e9-af09-203c9e7437ca name=/runtime.v1.RuntimeService/Version
	Jun 25 16:51:59 pause-756277 crio[2478]: time="2024-06-25 16:51:59.142476173Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f862513-a361-40e9-af09-203c9e7437ca name=/runtime.v1.RuntimeService/Version
	Jun 25 16:51:59 pause-756277 crio[2478]: time="2024-06-25 16:51:59.143892474Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2402a69-bfba-4b1a-88de-db500752bb22 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:51:59 pause-756277 crio[2478]: time="2024-06-25 16:51:59.144550443Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719334319144524629,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2402a69-bfba-4b1a-88de-db500752bb22 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:51:59 pause-756277 crio[2478]: time="2024-06-25 16:51:59.144979189Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=13225712-1582-4d14-bb47-244f6e80122a name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:51:59 pause-756277 crio[2478]: time="2024-06-25 16:51:59.145035935Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=13225712-1582-4d14-bb47-244f6e80122a name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:51:59 pause-756277 crio[2478]: time="2024-06-25 16:51:59.145347737Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1740598d5cf3e4ded267f89d2b1ce627811652faa5611ed1c54811aac4799b56,PodSandboxId:75ba72120d9946873b4404d468c1f50b920c84806daf2661a3bb6e1066e8bd3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719334297791062960,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7331ec8d20fdfec021ddab1d2b2e4438,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a25c3648d140888f573d7d76f44a4a9b301678446e57e7de9d648a15ef0e6477,PodSandboxId:88987c63b11430baa6c477d4f03c2fb16ce0f8cc82e05e1773300de3b3798151,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719334297789493345,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7640e520f03ea58f37be53c5e026b,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:815bfb4144a979c0d75febf219054e44f58be9cef61ae0e89f475aacac6d1797,PodSandboxId:27bccef8ab2f4cf098b444d320edd34cb819daa5663a5a9f6fb45f718a30ed71,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719334297769722348,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e062d122403f7f365a5b63f47c778e5,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6bf3ba,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2ee0a5a949a621f4b6dd6e3cddbb8da84c18c31f77678f5817f92c691f8c04a,PodSandboxId:c1ab1bd9d99dd5bac46c1f9bc67bddb946d666f2c9f983e74e08cd42b4631c89,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719334297762963682,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c158921e3bdda78ecc0ea9d20447d8,},Annotations:map[string]string{io.kubernetes.container.hash: 33772c37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35c64330d1f127ae90ecb45dc9fabcf2d04ee5c8ae6a8b906780deed57a4be43,PodSandboxId:51a664b19a7e20f1a04d601db6be55a6177b9becfe69785a6d151549a4dd066e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719334275418825022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jsf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddacba2-d039-40c7-8731-ba8e5707cfda,},Annotations:map[string]string{io.kubernetes.container.hash: df16ca51,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"}
,{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0822288d9ccfea7e829bb4c8c1ccbf4837614f5f2882d194b04550215bcf0d5,PodSandboxId:e6eca69bb88511a7765e98f9a0e1f9c8fb738497f34fa78aa51c9076d83fc375,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719334274576543392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2flf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc85c133-117a-4389-9f53-32d82b3e40ce,},Annotations:map[string]string{io.
kubernetes.container.hash: 2ed0814a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d61afd2cc4c8bd0458bb500b4ed4a32ae4210ac11f14960978760413d53aae9,PodSandboxId:c1ab1bd9d99dd5bac46c1f9bc67bddb946d666f2c9f983e74e08cd42b4631c89,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1719334274476386447,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c158921e3bdda78ecc0ea9d20447d8,},Annotations:map[string]string{io.kubernetes.container.hash: 33772c37,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc4673099e5d0c171adb781d9b2890366b76aebd88506fdcd169c982796c793,PodSandboxId:88987c63b11430baa6c477d4f03c2fb16ce0f8cc82e05e1773300de3b3798151,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1719334274489565438,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7640e520f03ea58f37be53c5e026b,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edec890ad5331763619a1058109ef59719931eb1e66170f810b25b86a63bbd3c,PodSandboxId:75ba72120d9946873b4404d468c1f50b920c84806daf2661a3bb6e1066e8bd3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1719334274422406316,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7331ec8d20fdfec021ddab1d2b2e4438,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a43a1e8fd4e02cc25bcade220c757c0d4c7e0c5ef687525fa7058aea35ce1d0e,PodSandboxId:27bccef8ab2f4cf098b444d320edd34cb819daa5663a5a9f6fb45f718a30ed71,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1719334274299312900,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e062d122403f7f365a5b63f47c778e5,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6bf3ba,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4711285f965e8c05454daca7fcdcc495b4cdb478f1da0464bbf229ee779c5f2a,PodSandboxId:f0857f728320f8fae5c8573045909064552e84eb979cb14a6161cde4254448d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719334187028383448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jsf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddacba2-d039-40c7-8731-ba8e5707cfda,},Annotations:map[string]string{io.kubernetes.container.hash: df16ca51,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f6fba0c0a9f02a1736519655e7546883e15c7aad2270f2c098353e2a7a73987,PodSandboxId:c0db4dbb08fbc0e070a59277c742bfda588f8402f9632600d8a72e1ffecabb90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1719334186354943404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2flf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: dc85c133-117a-4389-9f53-32d82b3e40ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2ed0814a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=13225712-1582-4d14-bb47-244f6e80122a name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:51:59 pause-756277 crio[2478]: time="2024-06-25 16:51:59.190702775Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=541ca979-8ed5-491d-b339-5c14a198e478 name=/runtime.v1.RuntimeService/Version
	Jun 25 16:51:59 pause-756277 crio[2478]: time="2024-06-25 16:51:59.190820908Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=541ca979-8ed5-491d-b339-5c14a198e478 name=/runtime.v1.RuntimeService/Version
	Jun 25 16:51:59 pause-756277 crio[2478]: time="2024-06-25 16:51:59.191856968Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e6ff71ad-18d5-4cea-8f53-1bc70a09fd7b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:51:59 pause-756277 crio[2478]: time="2024-06-25 16:51:59.192287016Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719334319192265036,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e6ff71ad-18d5-4cea-8f53-1bc70a09fd7b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:51:59 pause-756277 crio[2478]: time="2024-06-25 16:51:59.192882374Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=756c3f8e-7113-443e-8b13-ea6535ceebd6 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:51:59 pause-756277 crio[2478]: time="2024-06-25 16:51:59.192944068Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=756c3f8e-7113-443e-8b13-ea6535ceebd6 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:51:59 pause-756277 crio[2478]: time="2024-06-25 16:51:59.193262358Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1740598d5cf3e4ded267f89d2b1ce627811652faa5611ed1c54811aac4799b56,PodSandboxId:75ba72120d9946873b4404d468c1f50b920c84806daf2661a3bb6e1066e8bd3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719334297791062960,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7331ec8d20fdfec021ddab1d2b2e4438,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a25c3648d140888f573d7d76f44a4a9b301678446e57e7de9d648a15ef0e6477,PodSandboxId:88987c63b11430baa6c477d4f03c2fb16ce0f8cc82e05e1773300de3b3798151,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719334297789493345,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7640e520f03ea58f37be53c5e026b,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:815bfb4144a979c0d75febf219054e44f58be9cef61ae0e89f475aacac6d1797,PodSandboxId:27bccef8ab2f4cf098b444d320edd34cb819daa5663a5a9f6fb45f718a30ed71,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719334297769722348,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e062d122403f7f365a5b63f47c778e5,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6bf3ba,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2ee0a5a949a621f4b6dd6e3cddbb8da84c18c31f77678f5817f92c691f8c04a,PodSandboxId:c1ab1bd9d99dd5bac46c1f9bc67bddb946d666f2c9f983e74e08cd42b4631c89,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719334297762963682,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c158921e3bdda78ecc0ea9d20447d8,},Annotations:map[string]string{io.kubernetes.container.hash: 33772c37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35c64330d1f127ae90ecb45dc9fabcf2d04ee5c8ae6a8b906780deed57a4be43,PodSandboxId:51a664b19a7e20f1a04d601db6be55a6177b9becfe69785a6d151549a4dd066e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719334275418825022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jsf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddacba2-d039-40c7-8731-ba8e5707cfda,},Annotations:map[string]string{io.kubernetes.container.hash: df16ca51,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"}
,{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0822288d9ccfea7e829bb4c8c1ccbf4837614f5f2882d194b04550215bcf0d5,PodSandboxId:e6eca69bb88511a7765e98f9a0e1f9c8fb738497f34fa78aa51c9076d83fc375,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719334274576543392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2flf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc85c133-117a-4389-9f53-32d82b3e40ce,},Annotations:map[string]string{io.
kubernetes.container.hash: 2ed0814a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d61afd2cc4c8bd0458bb500b4ed4a32ae4210ac11f14960978760413d53aae9,PodSandboxId:c1ab1bd9d99dd5bac46c1f9bc67bddb946d666f2c9f983e74e08cd42b4631c89,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1719334274476386447,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c158921e3bdda78ecc0ea9d20447d8,},Annotations:map[string]string{io.kubernetes.container.hash: 33772c37,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc4673099e5d0c171adb781d9b2890366b76aebd88506fdcd169c982796c793,PodSandboxId:88987c63b11430baa6c477d4f03c2fb16ce0f8cc82e05e1773300de3b3798151,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1719334274489565438,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7640e520f03ea58f37be53c5e026b,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edec890ad5331763619a1058109ef59719931eb1e66170f810b25b86a63bbd3c,PodSandboxId:75ba72120d9946873b4404d468c1f50b920c84806daf2661a3bb6e1066e8bd3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1719334274422406316,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7331ec8d20fdfec021ddab1d2b2e4438,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a43a1e8fd4e02cc25bcade220c757c0d4c7e0c5ef687525fa7058aea35ce1d0e,PodSandboxId:27bccef8ab2f4cf098b444d320edd34cb819daa5663a5a9f6fb45f718a30ed71,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1719334274299312900,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e062d122403f7f365a5b63f47c778e5,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6bf3ba,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4711285f965e8c05454daca7fcdcc495b4cdb478f1da0464bbf229ee779c5f2a,PodSandboxId:f0857f728320f8fae5c8573045909064552e84eb979cb14a6161cde4254448d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719334187028383448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jsf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddacba2-d039-40c7-8731-ba8e5707cfda,},Annotations:map[string]string{io.kubernetes.container.hash: df16ca51,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f6fba0c0a9f02a1736519655e7546883e15c7aad2270f2c098353e2a7a73987,PodSandboxId:c0db4dbb08fbc0e070a59277c742bfda588f8402f9632600d8a72e1ffecabb90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1719334186354943404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2flf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: dc85c133-117a-4389-9f53-32d82b3e40ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2ed0814a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=756c3f8e-7113-443e-8b13-ea6535ceebd6 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:51:59 pause-756277 crio[2478]: time="2024-06-25 16:51:59.241995361Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=06a4124c-fe91-41d5-811c-70aeb9e8178e name=/runtime.v1.RuntimeService/Version
	Jun 25 16:51:59 pause-756277 crio[2478]: time="2024-06-25 16:51:59.242082028Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=06a4124c-fe91-41d5-811c-70aeb9e8178e name=/runtime.v1.RuntimeService/Version
	Jun 25 16:51:59 pause-756277 crio[2478]: time="2024-06-25 16:51:59.249252034Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0ef22590-4d7d-49c6-8e1a-ea136c04e650 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:51:59 pause-756277 crio[2478]: time="2024-06-25 16:51:59.249626033Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719334319249604280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ef22590-4d7d-49c6-8e1a-ea136c04e650 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:51:59 pause-756277 crio[2478]: time="2024-06-25 16:51:59.250396885Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0117a682-1405-4871-bfcc-b9adebd76ef8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:51:59 pause-756277 crio[2478]: time="2024-06-25 16:51:59.250451366Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0117a682-1405-4871-bfcc-b9adebd76ef8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:51:59 pause-756277 crio[2478]: time="2024-06-25 16:51:59.250809720Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1740598d5cf3e4ded267f89d2b1ce627811652faa5611ed1c54811aac4799b56,PodSandboxId:75ba72120d9946873b4404d468c1f50b920c84806daf2661a3bb6e1066e8bd3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719334297791062960,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7331ec8d20fdfec021ddab1d2b2e4438,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a25c3648d140888f573d7d76f44a4a9b301678446e57e7de9d648a15ef0e6477,PodSandboxId:88987c63b11430baa6c477d4f03c2fb16ce0f8cc82e05e1773300de3b3798151,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719334297789493345,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7640e520f03ea58f37be53c5e026b,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:815bfb4144a979c0d75febf219054e44f58be9cef61ae0e89f475aacac6d1797,PodSandboxId:27bccef8ab2f4cf098b444d320edd34cb819daa5663a5a9f6fb45f718a30ed71,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719334297769722348,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e062d122403f7f365a5b63f47c778e5,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6bf3ba,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2ee0a5a949a621f4b6dd6e3cddbb8da84c18c31f77678f5817f92c691f8c04a,PodSandboxId:c1ab1bd9d99dd5bac46c1f9bc67bddb946d666f2c9f983e74e08cd42b4631c89,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719334297762963682,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c158921e3bdda78ecc0ea9d20447d8,},Annotations:map[string]string{io.kubernetes.container.hash: 33772c37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35c64330d1f127ae90ecb45dc9fabcf2d04ee5c8ae6a8b906780deed57a4be43,PodSandboxId:51a664b19a7e20f1a04d601db6be55a6177b9becfe69785a6d151549a4dd066e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719334275418825022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jsf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddacba2-d039-40c7-8731-ba8e5707cfda,},Annotations:map[string]string{io.kubernetes.container.hash: df16ca51,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"}
,{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0822288d9ccfea7e829bb4c8c1ccbf4837614f5f2882d194b04550215bcf0d5,PodSandboxId:e6eca69bb88511a7765e98f9a0e1f9c8fb738497f34fa78aa51c9076d83fc375,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719334274576543392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2flf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc85c133-117a-4389-9f53-32d82b3e40ce,},Annotations:map[string]string{io.
kubernetes.container.hash: 2ed0814a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d61afd2cc4c8bd0458bb500b4ed4a32ae4210ac11f14960978760413d53aae9,PodSandboxId:c1ab1bd9d99dd5bac46c1f9bc67bddb946d666f2c9f983e74e08cd42b4631c89,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1719334274476386447,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c158921e3bdda78ecc0ea9d20447d8,},Annotations:map[string]string{io.kubernetes.container.hash: 33772c37,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc4673099e5d0c171adb781d9b2890366b76aebd88506fdcd169c982796c793,PodSandboxId:88987c63b11430baa6c477d4f03c2fb16ce0f8cc82e05e1773300de3b3798151,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1719334274489565438,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7640e520f03ea58f37be53c5e026b,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edec890ad5331763619a1058109ef59719931eb1e66170f810b25b86a63bbd3c,PodSandboxId:75ba72120d9946873b4404d468c1f50b920c84806daf2661a3bb6e1066e8bd3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1719334274422406316,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7331ec8d20fdfec021ddab1d2b2e4438,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a43a1e8fd4e02cc25bcade220c757c0d4c7e0c5ef687525fa7058aea35ce1d0e,PodSandboxId:27bccef8ab2f4cf098b444d320edd34cb819daa5663a5a9f6fb45f718a30ed71,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1719334274299312900,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e062d122403f7f365a5b63f47c778e5,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6bf3ba,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4711285f965e8c05454daca7fcdcc495b4cdb478f1da0464bbf229ee779c5f2a,PodSandboxId:f0857f728320f8fae5c8573045909064552e84eb979cb14a6161cde4254448d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719334187028383448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jsf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddacba2-d039-40c7-8731-ba8e5707cfda,},Annotations:map[string]string{io.kubernetes.container.hash: df16ca51,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f6fba0c0a9f02a1736519655e7546883e15c7aad2270f2c098353e2a7a73987,PodSandboxId:c0db4dbb08fbc0e070a59277c742bfda588f8402f9632600d8a72e1ffecabb90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1719334186354943404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2flf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: dc85c133-117a-4389-9f53-32d82b3e40ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2ed0814a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0117a682-1405-4871-bfcc-b9adebd76ef8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1740598d5cf3e       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   21 seconds ago      Running             kube-scheduler            2                   75ba72120d994       kube-scheduler-pause-756277
	a25c3648d1408       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   21 seconds ago      Running             kube-controller-manager   2                   88987c63b1143       kube-controller-manager-pause-756277
	815bfb4144a97       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   21 seconds ago      Running             kube-apiserver            2                   27bccef8ab2f4       kube-apiserver-pause-756277
	a2ee0a5a949a6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   21 seconds ago      Running             etcd                      2                   c1ab1bd9d99dd       etcd-pause-756277
	35c64330d1f12       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   43 seconds ago      Running             coredns                   1                   51a664b19a7e2       coredns-7db6d8ff4d-jsf7r
	f0822288d9ccf       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   44 seconds ago      Running             kube-proxy                1                   e6eca69bb8851       kube-proxy-k2flf
	afc4673099e5d       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   44 seconds ago      Exited              kube-controller-manager   1                   88987c63b1143       kube-controller-manager-pause-756277
	5d61afd2cc4c8       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   44 seconds ago      Exited              etcd                      1                   c1ab1bd9d99dd       etcd-pause-756277
	edec890ad5331       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   44 seconds ago      Exited              kube-scheduler            1                   75ba72120d994       kube-scheduler-pause-756277
	a43a1e8fd4e02       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   45 seconds ago      Exited              kube-apiserver            1                   27bccef8ab2f4       kube-apiserver-pause-756277
	4711285f965e8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   2 minutes ago       Exited              coredns                   0                   f0857f728320f       coredns-7db6d8ff4d-jsf7r
	7f6fba0c0a9f0       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   2 minutes ago       Exited              kube-proxy                0                   c0db4dbb08fbc       kube-proxy-k2flf
	
	
	==> coredns [35c64330d1f127ae90ecb45dc9fabcf2d04ee5c8ae6a8b906780deed57a4be43] <==
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:59343 - 47997 "HINFO IN 3079080349174427710.3312084374361514834. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021163949s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[876184842]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jun-2024 16:51:15.765) (total time: 10005ms):
	Trace[876184842]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10004ms (16:51:25.770)
	Trace[876184842]: [10.005096053s] [10.005096053s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[880905023]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jun-2024 16:51:15.769) (total time: 10001ms):
	Trace[880905023]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:51:25.771)
	Trace[880905023]: [10.001544802s] [10.001544802s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1493421118]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jun-2024 16:51:15.769) (total time: 10001ms):
	Trace[1493421118]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:51:25.771)
	Trace[1493421118]: [10.001726096s] [10.001726096s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:46510->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:46510->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:46512->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:46512->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:46494->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:46494->10.96.0.1:443: read: connection reset by peer
	
	
	==> coredns [4711285f965e8c05454daca7fcdcc495b4cdb478f1da0464bbf229ee779c5f2a] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1480068827]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jun-2024 16:49:47.488) (total time: 30000ms):
	Trace[1480068827]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (16:50:17.489)
	Trace[1480068827]: [30.000846366s] [30.000846366s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1828440197]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jun-2024 16:49:47.487) (total time: 30002ms):
	Trace[1828440197]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (16:50:17.488)
	Trace[1828440197]: [30.002450742s] [30.002450742s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1960613028]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jun-2024 16:49:47.489) (total time: 30001ms):
	Trace[1960613028]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (16:50:17.489)
	Trace[1960613028]: [30.00104752s] [30.00104752s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	[INFO] 127.0.0.1:37865 - 39199 "HINFO IN 1957842443361674784.2437041974684276081. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022665892s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-756277
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-756277
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b
	                    minikube.k8s.io/name=pause-756277
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_25T16_49_33_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 25 Jun 2024 16:49:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-756277
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 25 Jun 2024 16:51:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 25 Jun 2024 16:51:40 +0000   Tue, 25 Jun 2024 16:49:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 25 Jun 2024 16:51:40 +0000   Tue, 25 Jun 2024 16:49:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 25 Jun 2024 16:51:40 +0000   Tue, 25 Jun 2024 16:49:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 25 Jun 2024 16:51:40 +0000   Tue, 25 Jun 2024 16:49:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.163
	  Hostname:    pause-756277
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 eeb1476641764122aa8042096faae27a
	  System UUID:                eeb14766-4176-4122-aa80-42096faae27a
	  Boot ID:                    d9ea9705-1b7e-4cc2-ac62-f45f33132579
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-jsf7r                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m14s
	  kube-system                 etcd-pause-756277                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m27s
	  kube-system                 kube-apiserver-pause-756277             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-controller-manager-pause-756277    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-proxy-k2flf                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-scheduler-pause-756277             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m12s              kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  NodeHasSufficientPID     2m27s              kubelet          Node pause-756277 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m27s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m27s              kubelet          Node pause-756277 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m27s              kubelet          Node pause-756277 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m27s              kubelet          Starting kubelet.
	  Normal  NodeReady                2m26s              kubelet          Node pause-756277 status is now: NodeReady
	  Normal  RegisteredNode           2m14s              node-controller  Node pause-756277 event: Registered Node pause-756277 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node pause-756277 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node pause-756277 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node pause-756277 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6s                 node-controller  Node pause-756277 event: Registered Node pause-756277 in Controller
	
	
	==> dmesg <==
	[  +0.084322] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.199298] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.146435] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.316284] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +4.802188] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +0.074949] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.619908] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.611416] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.489347] systemd-fstab-generator[1290]: Ignoring "noauto" option for root device
	[  +0.115730] kauditd_printk_skb: 41 callbacks suppressed
	[ +13.941007] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.015490] systemd-fstab-generator[1538]: Ignoring "noauto" option for root device
	[ +11.446316] kauditd_printk_skb: 88 callbacks suppressed
	[Jun25 16:51] systemd-fstab-generator[2399]: Ignoring "noauto" option for root device
	[  +0.188722] systemd-fstab-generator[2411]: Ignoring "noauto" option for root device
	[  +0.198333] systemd-fstab-generator[2425]: Ignoring "noauto" option for root device
	[  +0.149886] systemd-fstab-generator[2437]: Ignoring "noauto" option for root device
	[  +0.307005] systemd-fstab-generator[2465]: Ignoring "noauto" option for root device
	[  +7.671887] systemd-fstab-generator[2592]: Ignoring "noauto" option for root device
	[  +0.125347] kauditd_printk_skb: 100 callbacks suppressed
	[ +12.710790] kauditd_printk_skb: 87 callbacks suppressed
	[ +10.846288] systemd-fstab-generator[3370]: Ignoring "noauto" option for root device
	[  +0.812273] kauditd_printk_skb: 17 callbacks suppressed
	[ +15.974902] kauditd_printk_skb: 12 callbacks suppressed
	[  +2.009832] systemd-fstab-generator[3693]: Ignoring "noauto" option for root device
	
	
	==> etcd [5d61afd2cc4c8bd0458bb500b4ed4a32ae4210ac11f14960978760413d53aae9] <==
	{"level":"warn","ts":"2024-06-25T16:51:15.312061Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-06-25T16:51:15.31439Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.50.163:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.50.163:2380","--initial-cluster=pause-756277=https://192.168.50.163:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.50.163:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.50.163:2380","--name=pause-756277","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trust
ed-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-06-25T16:51:15.31454Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-06-25T16:51:15.3146Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-06-25T16:51:15.314639Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.50.163:2380"]}
	{"level":"info","ts":"2024-06-25T16:51:15.314705Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-25T16:51:15.316304Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.163:2379"]}
	{"level":"info","ts":"2024-06-25T16:51:15.317307Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-756277","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.50.163:2380"],"listen-peer-urls":["https://192.168.50.163:2380"],"advertise-client-urls":["https://192.168.50.163:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.163:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cl
uster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-06-25T16:51:15.341682Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"23.127279ms"}
	{"level":"info","ts":"2024-06-25T16:51:15.382847Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-06-25T16:51:15.406736Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"c04ffccd875dba59","local-member-id":"7851e28efa6aae4","commit-index":444}
	{"level":"info","ts":"2024-06-25T16:51:15.408306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7851e28efa6aae4 switched to configuration voters=()"}
	{"level":"info","ts":"2024-06-25T16:51:15.408362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7851e28efa6aae4 became follower at term 2"}
	{"level":"info","ts":"2024-06-25T16:51:15.408387Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 7851e28efa6aae4 [peers: [], term: 2, commit: 444, applied: 0, lastindex: 444, lastterm: 2]"}
	{"level":"warn","ts":"2024-06-25T16:51:15.414303Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-06-25T16:51:15.483124Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":417}
	{"level":"info","ts":"2024-06-25T16:51:15.491233Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	
	
	==> etcd [a2ee0a5a949a621f4b6dd6e3cddbb8da84c18c31f77678f5817f92c691f8c04a] <==
	{"level":"info","ts":"2024-06-25T16:51:39.130264Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-25T16:51:39.130479Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-06-25T16:51:52.345052Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.712961ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12314125955741386215 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.163\" mod_revision:428 > success:<request_put:<key:\"/registry/masterleases/192.168.50.163\" value_size:67 lease:3090753918886610405 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.163\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-25T16:51:52.345428Z","caller":"traceutil/trace.go:171","msg":"trace[1228458270] linearizableReadLoop","detail":"{readStateIndex:524; appliedIndex:523; }","duration":"130.507545ms","start":"2024-06-25T16:51:52.214895Z","end":"2024-06-25T16:51:52.345403Z","steps":["trace[1228458270] 'read index received'  (duration: 29.026µs)","trace[1228458270] 'applied index is now lower than readState.Index'  (duration: 130.468755ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-25T16:51:52.345536Z","caller":"traceutil/trace.go:171","msg":"trace[690182632] transaction","detail":"{read_only:false; response_revision:477; number_of_response:1; }","duration":"257.418808ms","start":"2024-06-25T16:51:52.088089Z","end":"2024-06-25T16:51:52.345508Z","steps":["trace[690182632] 'process raft request'  (duration: 125.668909ms)","trace[690182632] 'compare'  (duration: 130.587509ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-25T16:51:52.345876Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.966469ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-pause-756277\" ","response":"range_response_count:1 size:4566"}
	{"level":"info","ts":"2024-06-25T16:51:52.345943Z","caller":"traceutil/trace.go:171","msg":"trace[743665062] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-pause-756277; range_end:; response_count:1; response_revision:477; }","duration":"131.0677ms","start":"2024-06-25T16:51:52.214865Z","end":"2024-06-25T16:51:52.345933Z","steps":["trace[743665062] 'agreement among raft nodes before linearized reading'  (duration: 130.65142ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-25T16:51:53.790078Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.366263ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12314125955741386272 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-sw7bk\" mod_revision:404 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-sw7bk\" value_size:1239 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-sw7bk\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-25T16:51:53.790315Z","caller":"traceutil/trace.go:171","msg":"trace[352695321] linearizableReadLoop","detail":"{readStateIndex:526; appliedIndex:525; }","duration":"224.304871ms","start":"2024-06-25T16:51:53.565969Z","end":"2024-06-25T16:51:53.790273Z","steps":["trace[352695321] 'read index received'  (duration: 29.480602ms)","trace[352695321] 'applied index is now lower than readState.Index'  (duration: 194.82311ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-25T16:51:53.790352Z","caller":"traceutil/trace.go:171","msg":"trace[1732289185] transaction","detail":"{read_only:false; response_revision:479; number_of_response:1; }","duration":"255.868301ms","start":"2024-06-25T16:51:53.534469Z","end":"2024-06-25T16:51:53.790338Z","steps":["trace[1732289185] 'process raft request'  (duration: 127.102947ms)","trace[1732289185] 'compare'  (duration: 128.22105ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-25T16:51:53.791787Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.747062ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-06-25T16:51:53.790426Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.475926ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner\" ","response":"range_response_count:1 size:238"}
	{"level":"info","ts":"2024-06-25T16:51:53.792908Z","caller":"traceutil/trace.go:171","msg":"trace[826437060] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner; range_end:; response_count:1; response_revision:479; }","duration":"226.976302ms","start":"2024-06-25T16:51:53.565914Z","end":"2024-06-25T16:51:53.79289Z","steps":["trace[826437060] 'agreement among raft nodes before linearized reading'  (duration: 224.462008ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-25T16:51:53.793354Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.490743ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" ","response":"range_response_count:1 size:203"}
	{"level":"info","ts":"2024-06-25T16:51:53.793474Z","caller":"traceutil/trace.go:171","msg":"trace[918250213] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpoint-controller; range_end:; response_count:1; response_revision:479; }","duration":"176.637538ms","start":"2024-06-25T16:51:53.616826Z","end":"2024-06-25T16:51:53.793463Z","steps":["trace[918250213] 'agreement among raft nodes before linearized reading'  (duration: 176.480647ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-25T16:51:53.79382Z","caller":"traceutil/trace.go:171","msg":"trace[1221698979] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:479; }","duration":"187.79978ms","start":"2024-06-25T16:51:53.606004Z","end":"2024-06-25T16:51:53.793804Z","steps":["trace[1221698979] 'agreement among raft nodes before linearized reading'  (duration: 185.746777ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-25T16:51:54.210104Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.045753ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12314125955741386280 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:429 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4069 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-25T16:51:54.21043Z","caller":"traceutil/trace.go:171","msg":"trace[859567794] transaction","detail":"{read_only:false; response_revision:481; number_of_response:1; }","duration":"387.087462ms","start":"2024-06-25T16:51:53.823329Z","end":"2024-06-25T16:51:54.210416Z","steps":["trace[859567794] 'process raft request'  (duration: 387.020408ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-25T16:51:54.212506Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-25T16:51:53.823317Z","time spent":"389.088956ms","remote":"127.0.0.1:49632","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":785,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:405 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:728 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >"}
	{"level":"info","ts":"2024-06-25T16:51:54.210582Z","caller":"traceutil/trace.go:171","msg":"trace[1854482293] linearizableReadLoop","detail":"{readStateIndex:527; appliedIndex:526; }","duration":"390.224942ms","start":"2024-06-25T16:51:53.820344Z","end":"2024-06-25T16:51:54.210569Z","steps":["trace[1854482293] 'read index received'  (duration: 235.648967ms)","trace[1854482293] 'applied index is now lower than readState.Index'  (duration: 154.574857ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-25T16:51:54.211014Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"390.65131ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking\" ","response":"range_response_count:1 size:370"}
	{"level":"info","ts":"2024-06-25T16:51:54.21272Z","caller":"traceutil/trace.go:171","msg":"trace[112953161] range","detail":"{range_begin:/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking; range_end:; response_count:1; response_revision:481; }","duration":"392.3807ms","start":"2024-06-25T16:51:53.820325Z","end":"2024-06-25T16:51:54.212706Z","steps":["trace[112953161] 'agreement among raft nodes before linearized reading'  (duration: 390.280541ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-25T16:51:54.212752Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-25T16:51:53.820317Z","time spent":"392.424293ms","remote":"127.0.0.1:49572","response type":"/etcdserverpb.KV/Range","request count":0,"request size":87,"response count":1,"response size":393,"request content":"key:\"/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking\" "}
	{"level":"info","ts":"2024-06-25T16:51:54.211053Z","caller":"traceutil/trace.go:171","msg":"trace[1500689346] transaction","detail":"{read_only:false; response_revision:480; number_of_response:1; }","duration":"393.187669ms","start":"2024-06-25T16:51:53.817853Z","end":"2024-06-25T16:51:54.211041Z","steps":["trace[1500689346] 'process raft request'  (duration: 238.12993ms)","trace[1500689346] 'compare'  (duration: 153.955522ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-25T16:51:54.212929Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-25T16:51:53.81784Z","time spent":"395.053146ms","remote":"127.0.0.1:49930","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4118,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:429 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4069 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	
	
	==> kernel <==
	 16:51:59 up 3 min,  0 users,  load average: 0.82, 0.34, 0.13
	Linux pause-756277 5.10.207 #1 SMP Mon Jun 24 21:03:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [815bfb4144a979c0d75febf219054e44f58be9cef61ae0e89f475aacac6d1797] <==
	I0625 16:51:40.628215       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0625 16:51:40.700844       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0625 16:51:40.700892       1 policy_source.go:224] refreshing policies
	I0625 16:51:40.724918       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0625 16:51:40.728331       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0625 16:51:40.735695       1 aggregator.go:165] initial CRD sync complete...
	I0625 16:51:40.735741       1 autoregister_controller.go:141] Starting autoregister controller
	I0625 16:51:40.735748       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0625 16:51:40.735762       1 cache.go:39] Caches are synced for autoregister controller
	I0625 16:51:40.768437       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0625 16:51:40.768496       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0625 16:51:40.768503       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0625 16:51:40.776434       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0625 16:51:40.791055       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0625 16:51:40.791252       1 shared_informer.go:320] Caches are synced for configmaps
	I0625 16:51:40.795680       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0625 16:51:40.819914       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0625 16:51:41.568634       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0625 16:51:42.002764       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0625 16:51:42.023425       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0625 16:51:42.061569       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0625 16:51:42.099754       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0625 16:51:42.112974       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0625 16:51:53.533235       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0625 16:51:53.822708       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [a43a1e8fd4e02cc25bcade220c757c0d4c7e0c5ef687525fa7058aea35ce1d0e] <==
	I0625 16:51:14.944075       1 server.go:148] Version: v1.30.2
	I0625 16:51:14.944322       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0625 16:51:16.051826       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:16.051926       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0625 16:51:16.053340       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0625 16:51:16.081356       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0625 16:51:16.081397       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0625 16:51:16.081603       1 instance.go:299] Using reconciler: lease
	W0625 16:51:16.082828       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0625 16:51:16.082927       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0625 16:51:17.053043       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:17.053208       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:17.084386       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:18.473622       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:18.723460       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:18.799674       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:20.847553       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:21.242534       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:21.782952       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:24.838671       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:25.691591       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:26.562842       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:31.451796       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:33.056927       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:33.740326       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [a25c3648d140888f573d7d76f44a4a9b301678446e57e7de9d648a15ef0e6477] <==
	I0625 16:51:53.522383       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0625 16:51:53.525234       1 shared_informer.go:320] Caches are synced for taint
	I0625 16:51:53.525469       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0625 16:51:53.525662       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-756277"
	I0625 16:51:53.525789       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0625 16:51:53.528333       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0625 16:51:53.530233       1 shared_informer.go:320] Caches are synced for endpoint
	I0625 16:51:53.540308       1 shared_informer.go:320] Caches are synced for service account
	I0625 16:51:53.548622       1 shared_informer.go:320] Caches are synced for daemon sets
	I0625 16:51:53.556488       1 shared_informer.go:320] Caches are synced for crt configmap
	I0625 16:51:53.560836       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0625 16:51:53.560875       1 shared_informer.go:320] Caches are synced for attach detach
	I0625 16:51:53.564591       1 shared_informer.go:320] Caches are synced for expand
	I0625 16:51:53.567037       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0625 16:51:53.573465       1 shared_informer.go:320] Caches are synced for HPA
	I0625 16:51:53.639804       1 shared_informer.go:320] Caches are synced for disruption
	I0625 16:51:53.686222       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0625 16:51:53.686533       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="91.018µs"
	I0625 16:51:53.723325       1 shared_informer.go:320] Caches are synced for deployment
	I0625 16:51:53.725324       1 shared_informer.go:320] Caches are synced for resource quota
	I0625 16:51:53.736125       1 shared_informer.go:320] Caches are synced for cronjob
	I0625 16:51:53.736458       1 shared_informer.go:320] Caches are synced for resource quota
	I0625 16:51:54.192782       1 shared_informer.go:320] Caches are synced for garbage collector
	I0625 16:51:54.220201       1 shared_informer.go:320] Caches are synced for garbage collector
	I0625 16:51:54.220306       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [afc4673099e5d0c171adb781d9b2890366b76aebd88506fdcd169c982796c793] <==
	
	
	==> kube-proxy [7f6fba0c0a9f02a1736519655e7546883e15c7aad2270f2c098353e2a7a73987] <==
	I0625 16:49:46.872576       1 server_linux.go:69] "Using iptables proxy"
	I0625 16:49:46.979135       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.163"]
	I0625 16:49:47.139932       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0625 16:49:47.139974       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0625 16:49:47.139990       1 server_linux.go:165] "Using iptables Proxier"
	I0625 16:49:47.150208       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0625 16:49:47.153246       1 server.go:872] "Version info" version="v1.30.2"
	I0625 16:49:47.153265       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0625 16:49:47.162417       1 config.go:192] "Starting service config controller"
	I0625 16:49:47.171935       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0625 16:49:47.177328       1 config.go:101] "Starting endpoint slice config controller"
	I0625 16:49:47.177371       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0625 16:49:47.199098       1 config.go:319] "Starting node config controller"
	I0625 16:49:47.199371       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0625 16:49:47.277662       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0625 16:49:47.278369       1 shared_informer.go:320] Caches are synced for service config
	I0625 16:49:47.300999       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f0822288d9ccfea7e829bb4c8c1ccbf4837614f5f2882d194b04550215bcf0d5] <==
	I0625 16:51:16.000656       1 server_linux.go:69] "Using iptables proxy"
	E0625 16:51:26.005695       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-756277\": net/http: TLS handshake timeout"
	E0625 16:51:36.752829       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-756277\": dial tcp 192.168.50.163:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.50.163:43008->192.168.50.163:8443: read: connection reset by peer"
	I0625 16:51:40.745779       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.163"]
	I0625 16:51:40.855039       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0625 16:51:40.855219       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0625 16:51:40.855268       1 server_linux.go:165] "Using iptables Proxier"
	I0625 16:51:40.861284       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0625 16:51:40.861501       1 server.go:872] "Version info" version="v1.30.2"
	I0625 16:51:40.861511       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0625 16:51:40.863643       1 config.go:192] "Starting service config controller"
	I0625 16:51:40.863676       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0625 16:51:40.863699       1 config.go:101] "Starting endpoint slice config controller"
	I0625 16:51:40.863703       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0625 16:51:40.864241       1 config.go:319] "Starting node config controller"
	I0625 16:51:40.864267       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0625 16:51:40.964502       1 shared_informer.go:320] Caches are synced for node config
	I0625 16:51:40.964553       1 shared_informer.go:320] Caches are synced for service config
	I0625 16:51:40.964574       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1740598d5cf3e4ded267f89d2b1ce627811652faa5611ed1c54811aac4799b56] <==
	I0625 16:51:38.975649       1 serving.go:380] Generated self-signed cert in-memory
	W0625 16:51:40.679766       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0625 16:51:40.679945       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0625 16:51:40.679965       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0625 16:51:40.680074       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0625 16:51:40.746921       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0625 16:51:40.747084       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0625 16:51:40.752730       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0625 16:51:40.753365       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0625 16:51:40.753382       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0625 16:51:40.763542       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0625 16:51:40.864537       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [edec890ad5331763619a1058109ef59719931eb1e66170f810b25b86a63bbd3c] <==
	I0625 16:51:16.281198       1 serving.go:380] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Jun 25 16:51:37 pause-756277 kubelet[3377]: I0625 16:51:37.528806    3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/37c7640e520f03ea58f37be53c5e026b-k8s-certs\") pod \"kube-controller-manager-pause-756277\" (UID: \"37c7640e520f03ea58f37be53c5e026b\") " pod="kube-system/kube-controller-manager-pause-756277"
	Jun 25 16:51:37 pause-756277 kubelet[3377]: I0625 16:51:37.528844    3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/37c7640e520f03ea58f37be53c5e026b-kubeconfig\") pod \"kube-controller-manager-pause-756277\" (UID: \"37c7640e520f03ea58f37be53c5e026b\") " pod="kube-system/kube-controller-manager-pause-756277"
	Jun 25 16:51:37 pause-756277 kubelet[3377]: I0625 16:51:37.528873    3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7331ec8d20fdfec021ddab1d2b2e4438-kubeconfig\") pod \"kube-scheduler-pause-756277\" (UID: \"7331ec8d20fdfec021ddab1d2b2e4438\") " pod="kube-system/kube-scheduler-pause-756277"
	Jun 25 16:51:37 pause-756277 kubelet[3377]: I0625 16:51:37.589909    3377 kubelet_node_status.go:73] "Attempting to register node" node="pause-756277"
	Jun 25 16:51:37 pause-756277 kubelet[3377]: E0625 16:51:37.590917    3377 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.163:8443: connect: connection refused" node="pause-756277"
	Jun 25 16:51:37 pause-756277 kubelet[3377]: I0625 16:51:37.737119    3377 scope.go:117] "RemoveContainer" containerID="5d61afd2cc4c8bd0458bb500b4ed4a32ae4210ac11f14960978760413d53aae9"
	Jun 25 16:51:37 pause-756277 kubelet[3377]: I0625 16:51:37.746315    3377 scope.go:117] "RemoveContainer" containerID="a43a1e8fd4e02cc25bcade220c757c0d4c7e0c5ef687525fa7058aea35ce1d0e"
	Jun 25 16:51:37 pause-756277 kubelet[3377]: I0625 16:51:37.747260    3377 scope.go:117] "RemoveContainer" containerID="afc4673099e5d0c171adb781d9b2890366b76aebd88506fdcd169c982796c793"
	Jun 25 16:51:37 pause-756277 kubelet[3377]: I0625 16:51:37.747371    3377 scope.go:117] "RemoveContainer" containerID="edec890ad5331763619a1058109ef59719931eb1e66170f810b25b86a63bbd3c"
	Jun 25 16:51:37 pause-756277 kubelet[3377]: E0625 16:51:37.893014    3377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-756277?timeout=10s\": dial tcp 192.168.50.163:8443: connect: connection refused" interval="800ms"
	Jun 25 16:51:37 pause-756277 kubelet[3377]: I0625 16:51:37.995984    3377 kubelet_node_status.go:73] "Attempting to register node" node="pause-756277"
	Jun 25 16:51:37 pause-756277 kubelet[3377]: E0625 16:51:37.997018    3377 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.163:8443: connect: connection refused" node="pause-756277"
	Jun 25 16:51:38 pause-756277 kubelet[3377]: I0625 16:51:38.799133    3377 kubelet_node_status.go:73] "Attempting to register node" node="pause-756277"
	Jun 25 16:51:40 pause-756277 kubelet[3377]: I0625 16:51:40.776803    3377 kubelet_node_status.go:112] "Node was previously registered" node="pause-756277"
	Jun 25 16:51:40 pause-756277 kubelet[3377]: I0625 16:51:40.776892    3377 kubelet_node_status.go:76] "Successfully registered node" node="pause-756277"
	Jun 25 16:51:40 pause-756277 kubelet[3377]: I0625 16:51:40.778454    3377 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 25 16:51:40 pause-756277 kubelet[3377]: I0625 16:51:40.779764    3377 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 25 16:51:40 pause-756277 kubelet[3377]: E0625 16:51:40.860985    3377 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"etcd-pause-756277\" already exists" pod="kube-system/etcd-pause-756277"
	Jun 25 16:51:41 pause-756277 kubelet[3377]: I0625 16:51:41.249950    3377 apiserver.go:52] "Watching apiserver"
	Jun 25 16:51:41 pause-756277 kubelet[3377]: I0625 16:51:41.255298    3377 topology_manager.go:215] "Topology Admit Handler" podUID="dc85c133-117a-4389-9f53-32d82b3e40ce" podNamespace="kube-system" podName="kube-proxy-k2flf"
	Jun 25 16:51:41 pause-756277 kubelet[3377]: I0625 16:51:41.255491    3377 topology_manager.go:215] "Topology Admit Handler" podUID="8ddacba2-d039-40c7-8731-ba8e5707cfda" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jsf7r"
	Jun 25 16:51:41 pause-756277 kubelet[3377]: E0625 16:51:41.266038    3377 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-pause-756277\" already exists" pod="kube-system/kube-controller-manager-pause-756277"
	Jun 25 16:51:41 pause-756277 kubelet[3377]: I0625 16:51:41.287366    3377 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 25 16:51:41 pause-756277 kubelet[3377]: I0625 16:51:41.317455    3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc85c133-117a-4389-9f53-32d82b3e40ce-lib-modules\") pod \"kube-proxy-k2flf\" (UID: \"dc85c133-117a-4389-9f53-32d82b3e40ce\") " pod="kube-system/kube-proxy-k2flf"
	Jun 25 16:51:41 pause-756277 kubelet[3377]: I0625 16:51:41.317635    3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc85c133-117a-4389-9f53-32d82b3e40ce-xtables-lock\") pod \"kube-proxy-k2flf\" (UID: \"dc85c133-117a-4389-9f53-32d82b3e40ce\") " pod="kube-system/kube-proxy-k2flf"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-756277 -n pause-756277
helpers_test.go:261: (dbg) Run:  kubectl --context pause-756277 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-756277 -n pause-756277
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-756277 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-756277 logs -n 25: (1.457034449s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                    Args                    |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-514698 sudo cat                  | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo                      | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | cri-dockerd --version                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo                      | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | systemctl status containerd                |                           |         |         |                     |                     |
	|         | --all --full --no-pager                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo                      | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | systemctl cat containerd                   |                           |         |         |                     |                     |
	|         | --no-pager                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo cat                  | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | /lib/systemd/system/containerd.service     |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo cat                  | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | /etc/containerd/config.toml                |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo                      | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | containerd config dump                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo                      | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | systemctl status crio --all                |                           |         |         |                     |                     |
	|         | --full --no-pager                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo                      | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | systemctl cat crio --no-pager              |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo find                 | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | /etc/crio -type f -exec sh -c              |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                       |                           |         |         |                     |                     |
	| ssh     | -p cilium-514698 sudo crio                 | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC |                     |
	|         | config                                     |                           |         |         |                     |                     |
	| delete  | -p cilium-514698                           | cilium-514698             | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC | 25 Jun 24 16:49 UTC |
	| stop    | -p kubernetes-upgrade-497568               | kubernetes-upgrade-497568 | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC | 25 Jun 24 16:49 UTC |
	| start   | -p kubernetes-upgrade-497568               | kubernetes-upgrade-497568 | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC | 25 Jun 24 16:50 UTC |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2               |                           |         |         |                     |                     |
	|         | --alsologtostderr                          |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| start   | -p cert-expiration-076008                  | cert-expiration-076008    | jenkins | v1.33.1 | 25 Jun 24 16:49 UTC | 25 Jun 24 16:50 UTC |
	|         | --memory=2048                              |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                       |                           |         |         |                     |                     |
	|         | --driver=kvm2                              |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-759584                | force-systemd-env-759584  | jenkins | v1.33.1 | 25 Jun 24 16:50 UTC | 25 Jun 24 16:50 UTC |
	| start   | -p force-systemd-flag-740596               | force-systemd-flag-740596 | jenkins | v1.33.1 | 25 Jun 24 16:50 UTC | 25 Jun 24 16:51 UTC |
	|         | --memory=2048 --force-systemd              |                           |         |         |                     |                     |
	|         | --alsologtostderr                          |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| start   | -p pause-756277                            | pause-756277              | jenkins | v1.33.1 | 25 Jun 24 16:50 UTC | 25 Jun 24 16:51 UTC |
	|         | --alsologtostderr                          |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-497568               | kubernetes-upgrade-497568 | jenkins | v1.33.1 | 25 Jun 24 16:50 UTC |                     |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0               |                           |         |         |                     |                     |
	|         | --driver=kvm2                              |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-497568               | kubernetes-upgrade-497568 | jenkins | v1.33.1 | 25 Jun 24 16:50 UTC | 25 Jun 24 16:51 UTC |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2               |                           |         |         |                     |                     |
	|         | --alsologtostderr                          |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-740596 ssh cat          | force-systemd-flag-740596 | jenkins | v1.33.1 | 25 Jun 24 16:51 UTC | 25 Jun 24 16:51 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf         |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-740596               | force-systemd-flag-740596 | jenkins | v1.33.1 | 25 Jun 24 16:51 UTC | 25 Jun 24 16:51 UTC |
	| start   | -p cert-options-742979                     | cert-options-742979       | jenkins | v1.33.1 | 25 Jun 24 16:51 UTC |                     |
	|         | --memory=2048                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                  |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15              |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com           |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                      |                           |         |         |                     |                     |
	|         | --driver=kvm2                              |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-497568               | kubernetes-upgrade-497568 | jenkins | v1.33.1 | 25 Jun 24 16:51 UTC | 25 Jun 24 16:51 UTC |
	| start   | -p old-k8s-version-462347                  | old-k8s-version-462347    | jenkins | v1.33.1 | 25 Jun 24 16:51 UTC |                     |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true              |                           |         |         |                     |                     |
	|         | --kvm-network=default                      |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system              |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                    |                           |         |         |                     |                     |
	|         | --keep-context=false                       |                           |         |         |                     |                     |
	|         | --driver=kvm2                              |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0               |                           |         |         |                     |                     |
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/25 16:51:43
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0625 16:51:43.429508   67969 out.go:291] Setting OutFile to fd 1 ...
	I0625 16:51:43.429718   67969 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:51:43.429727   67969 out.go:304] Setting ErrFile to fd 2...
	I0625 16:51:43.429731   67969 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:51:43.429912   67969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 16:51:43.430412   67969 out.go:298] Setting JSON to false
	I0625 16:51:43.431354   67969 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9247,"bootTime":1719325056,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0625 16:51:43.431408   67969 start.go:139] virtualization: kvm guest
	I0625 16:51:43.433659   67969 out.go:177] * [old-k8s-version-462347] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0625 16:51:43.435144   67969 out.go:177]   - MINIKUBE_LOCATION=19128
	I0625 16:51:43.435162   67969 notify.go:220] Checking for updates...
	I0625 16:51:43.437619   67969 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0625 16:51:43.438921   67969 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19128-13846/kubeconfig
	I0625 16:51:43.440254   67969 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 16:51:43.441509   67969 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0625 16:51:43.442912   67969 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0625 16:51:43.444449   67969 config.go:182] Loaded profile config "cert-expiration-076008": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:51:43.444562   67969 config.go:182] Loaded profile config "cert-options-742979": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:51:43.444719   67969 config.go:182] Loaded profile config "pause-756277": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:51:43.444814   67969 driver.go:392] Setting default libvirt URI to qemu:///system
	I0625 16:51:43.480208   67969 out.go:177] * Using the kvm2 driver based on user configuration
	I0625 16:51:43.481426   67969 start.go:297] selected driver: kvm2
	I0625 16:51:43.481441   67969 start.go:901] validating driver "kvm2" against <nil>
	I0625 16:51:43.481455   67969 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0625 16:51:43.482141   67969 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0625 16:51:43.482201   67969 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19128-13846/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0625 16:51:43.497309   67969 install.go:137] /home/jenkins/minikube-integration/19128-13846/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0625 16:51:43.497376   67969 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0625 16:51:43.497597   67969 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0625 16:51:43.497667   67969 cni.go:84] Creating CNI manager for ""
	I0625 16:51:43.497684   67969 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0625 16:51:43.497696   67969 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0625 16:51:43.497755   67969 start.go:340] cluster config:
	{Name:old-k8s-version-462347 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-462347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 16:51:43.497881   67969 iso.go:125] acquiring lock: {Name:mk76df652d5e768afc73443035d5ecb8b75ed16c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0625 16:51:43.500654   67969 out.go:177] * Starting "old-k8s-version-462347" primary control-plane node in "old-k8s-version-462347" cluster
	I0625 16:51:43.501901   67969 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0625 16:51:43.501939   67969 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0625 16:51:43.501952   67969 cache.go:56] Caching tarball of preloaded images
	I0625 16:51:43.502044   67969 preload.go:173] Found /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0625 16:51:43.502058   67969 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0625 16:51:43.502169   67969 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/old-k8s-version-462347/config.json ...
	I0625 16:51:43.502190   67969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/old-k8s-version-462347/config.json: {Name:mk65a4e524b9b7230e9ec3336d3ee84ebe9e5eda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:51:43.502342   67969 start.go:360] acquireMachinesLock for old-k8s-version-462347: {Name:mk2a1ebee912b37a2b68bf2f76641f82f8fc2fcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0625 16:51:44.924185   67969 start.go:364] duration metric: took 1.421787034s to acquireMachinesLock for "old-k8s-version-462347"
	I0625 16:51:44.924248   67969 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-462347 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-462347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0625 16:51:44.924356   67969 start.go:125] createHost starting for "" (driver="kvm2")
	I0625 16:51:43.287403   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.288096   67510 main.go:141] libmachine: (cert-options-742979) Found IP for machine: 192.168.83.28
	I0625 16:51:43.288117   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has current primary IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.288125   67510 main.go:141] libmachine: (cert-options-742979) Reserving static IP address...
	I0625 16:51:43.288490   67510 main.go:141] libmachine: (cert-options-742979) DBG | unable to find host DHCP lease matching {name: "cert-options-742979", mac: "52:54:00:b5:c8:1f", ip: "192.168.83.28"} in network mk-cert-options-742979
	I0625 16:51:43.363354   67510 main.go:141] libmachine: (cert-options-742979) DBG | Getting to WaitForSSH function...
	I0625 16:51:43.363370   67510 main.go:141] libmachine: (cert-options-742979) Reserved static IP address: 192.168.83.28
	I0625 16:51:43.363380   67510 main.go:141] libmachine: (cert-options-742979) Waiting for SSH to be available...
	I0625 16:51:43.366211   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.366693   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:43.366709   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.366845   67510 main.go:141] libmachine: (cert-options-742979) DBG | Using SSH client type: external
	I0625 16:51:43.366859   67510 main.go:141] libmachine: (cert-options-742979) DBG | Using SSH private key: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/cert-options-742979/id_rsa (-rw-------)
	I0625 16:51:43.366892   67510 main.go:141] libmachine: (cert-options-742979) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.28 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19128-13846/.minikube/machines/cert-options-742979/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0625 16:51:43.366900   67510 main.go:141] libmachine: (cert-options-742979) DBG | About to run SSH command:
	I0625 16:51:43.366910   67510 main.go:141] libmachine: (cert-options-742979) DBG | exit 0
	I0625 16:51:43.498776   67510 main.go:141] libmachine: (cert-options-742979) DBG | SSH cmd err, output: <nil>: 
	I0625 16:51:43.498992   67510 main.go:141] libmachine: (cert-options-742979) KVM machine creation complete!
	I0625 16:51:43.499338   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetConfigRaw
	I0625 16:51:43.500318   67510 main.go:141] libmachine: (cert-options-742979) Calling .DriverName
	I0625 16:51:43.500526   67510 main.go:141] libmachine: (cert-options-742979) Calling .DriverName
	I0625 16:51:43.500669   67510 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0625 16:51:43.500679   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetState
	I0625 16:51:43.502386   67510 main.go:141] libmachine: Detecting operating system of created instance...
	I0625 16:51:43.502395   67510 main.go:141] libmachine: Waiting for SSH to be available...
	I0625 16:51:43.502401   67510 main.go:141] libmachine: Getting to WaitForSSH function...
	I0625 16:51:43.502408   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHHostname
	I0625 16:51:43.504645   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.505005   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:43.505040   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.505148   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHPort
	I0625 16:51:43.505306   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:43.505443   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:43.505553   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHUsername
	I0625 16:51:43.505696   67510 main.go:141] libmachine: Using SSH client type: native
	I0625 16:51:43.505859   67510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.83.28 22 <nil> <nil>}
	I0625 16:51:43.505864   67510 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0625 16:51:43.609541   67510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0625 16:51:43.609550   67510 main.go:141] libmachine: Detecting the provisioner...
	I0625 16:51:43.609556   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHHostname
	I0625 16:51:43.612297   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.612650   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:43.612672   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.612813   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHPort
	I0625 16:51:43.612943   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:43.613068   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:43.613159   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHUsername
	I0625 16:51:43.613305   67510 main.go:141] libmachine: Using SSH client type: native
	I0625 16:51:43.613453   67510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.83.28 22 <nil> <nil>}
	I0625 16:51:43.613458   67510 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0625 16:51:43.722803   67510 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0625 16:51:43.722855   67510 main.go:141] libmachine: found compatible host: buildroot
	I0625 16:51:43.722860   67510 main.go:141] libmachine: Provisioning with buildroot...
	I0625 16:51:43.722866   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetMachineName
	I0625 16:51:43.723096   67510 buildroot.go:166] provisioning hostname "cert-options-742979"
	I0625 16:51:43.723128   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetMachineName
	I0625 16:51:43.723295   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHHostname
	I0625 16:51:43.725903   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.726254   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:43.726275   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.726386   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHPort
	I0625 16:51:43.726572   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:43.726713   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:43.726803   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHUsername
	I0625 16:51:43.726940   67510 main.go:141] libmachine: Using SSH client type: native
	I0625 16:51:43.727112   67510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.83.28 22 <nil> <nil>}
	I0625 16:51:43.727118   67510 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-options-742979 && echo "cert-options-742979" | sudo tee /etc/hostname
	I0625 16:51:43.849085   67510 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-options-742979
	
	I0625 16:51:43.849104   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHHostname
	I0625 16:51:43.851870   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.852288   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:43.852308   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.852490   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHPort
	I0625 16:51:43.852679   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:43.852819   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:43.852986   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHUsername
	I0625 16:51:43.853137   67510 main.go:141] libmachine: Using SSH client type: native
	I0625 16:51:43.853331   67510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.83.28 22 <nil> <nil>}
	I0625 16:51:43.853342   67510 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-options-742979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-options-742979/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-options-742979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0625 16:51:43.966932   67510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0625 16:51:43.966952   67510 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19128-13846/.minikube CaCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19128-13846/.minikube}
	I0625 16:51:43.966986   67510 buildroot.go:174] setting up certificates
	I0625 16:51:43.966997   67510 provision.go:84] configureAuth start
	I0625 16:51:43.967005   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetMachineName
	I0625 16:51:43.967272   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetIP
	I0625 16:51:43.969653   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.970018   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:43.970044   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.970183   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHHostname
	I0625 16:51:43.972392   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.972697   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:43.972717   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:43.972858   67510 provision.go:143] copyHostCerts
	I0625 16:51:43.972916   67510 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem, removing ...
	I0625 16:51:43.972928   67510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem
	I0625 16:51:43.972988   67510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/ca.pem (1078 bytes)
	I0625 16:51:43.973075   67510 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem, removing ...
	I0625 16:51:43.973078   67510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem
	I0625 16:51:43.973099   67510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/cert.pem (1123 bytes)
	I0625 16:51:43.973158   67510 exec_runner.go:144] found /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem, removing ...
	I0625 16:51:43.973160   67510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem
	I0625 16:51:43.973184   67510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19128-13846/.minikube/key.pem (1679 bytes)
	I0625 16:51:43.973237   67510 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem org=jenkins.cert-options-742979 san=[127.0.0.1 192.168.83.28 cert-options-742979 localhost minikube]
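provision.go:117 issues a server certificate whose SANs cover 127.0.0.1, the VM address 192.168.83.28, the machine name, localhost and minikube, signed with the ca.pem/ca-key.pem pair listed earlier. A compact crypto/x509 sketch of the same idea; it self-signs for brevity where minikube signs with its CA, and the 26280h lifetime is taken from the CertExpiration value that appears later in this log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.cert-options-742979"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the san=[...] list in the line above.
		DNSNames:    []string{"cert-options-742979", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.83.28")},
	}
	// Self-signed here; the real provisioner passes its CA certificate and key instead of tmpl/key.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}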
	I0625 16:51:44.226766   67510 provision.go:177] copyRemoteCerts
	I0625 16:51:44.226804   67510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0625 16:51:44.226824   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHHostname
	I0625 16:51:44.229352   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.229698   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:44.229724   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.229859   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHPort
	I0625 16:51:44.230041   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:44.230178   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHUsername
	I0625 16:51:44.230301   67510 sshutil.go:53] new ssh client: &{IP:192.168.83.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/cert-options-742979/id_rsa Username:docker}
	I0625 16:51:44.314170   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0625 16:51:44.337862   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0625 16:51:44.360773   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0625 16:51:44.385688   67510 provision.go:87] duration metric: took 418.680723ms to configureAuth
	I0625 16:51:44.385704   67510 buildroot.go:189] setting minikube options for container-runtime
	I0625 16:51:44.385848   67510 config.go:182] Loaded profile config "cert-options-742979": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:51:44.385978   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHHostname
	I0625 16:51:44.388872   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.389215   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:44.389239   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.389419   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHPort
	I0625 16:51:44.389642   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:44.389802   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:44.389961   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHUsername
	I0625 16:51:44.390132   67510 main.go:141] libmachine: Using SSH client type: native
	I0625 16:51:44.390294   67510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.83.28 22 <nil> <nil>}
	I0625 16:51:44.390303   67510 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0625 16:51:44.678859   67510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0625 16:51:44.678874   67510 main.go:141] libmachine: Checking connection to Docker...
	I0625 16:51:44.678879   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetURL
	I0625 16:51:44.680144   67510 main.go:141] libmachine: (cert-options-742979) DBG | Using libvirt version 6000000
	I0625 16:51:44.682378   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.682712   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:44.682740   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.682845   67510 main.go:141] libmachine: Docker is up and running!
	I0625 16:51:44.682855   67510 main.go:141] libmachine: Reticulating splines...
	I0625 16:51:44.682860   67510 client.go:171] duration metric: took 24.522308355s to LocalClient.Create
	I0625 16:51:44.682878   67510 start.go:167] duration metric: took 24.522359045s to libmachine.API.Create "cert-options-742979"
	I0625 16:51:44.682883   67510 start.go:293] postStartSetup for "cert-options-742979" (driver="kvm2")
	I0625 16:51:44.682891   67510 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0625 16:51:44.682903   67510 main.go:141] libmachine: (cert-options-742979) Calling .DriverName
	I0625 16:51:44.683158   67510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0625 16:51:44.683180   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHHostname
	I0625 16:51:44.685309   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.685654   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:44.685689   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.685822   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHPort
	I0625 16:51:44.686010   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:44.686190   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHUsername
	I0625 16:51:44.686357   67510 sshutil.go:53] new ssh client: &{IP:192.168.83.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/cert-options-742979/id_rsa Username:docker}
	I0625 16:51:44.770702   67510 ssh_runner.go:195] Run: cat /etc/os-release
	I0625 16:51:44.775141   67510 info.go:137] Remote host: Buildroot 2023.02.9
	I0625 16:51:44.775155   67510 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/addons for local assets ...
	I0625 16:51:44.775223   67510 filesync.go:126] Scanning /home/jenkins/minikube-integration/19128-13846/.minikube/files for local assets ...
	I0625 16:51:44.775338   67510 filesync.go:149] local asset: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem -> 212392.pem in /etc/ssl/certs
	I0625 16:51:44.775456   67510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0625 16:51:44.784879   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem --> /etc/ssl/certs/212392.pem (1708 bytes)
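filesync.go scans $MINIKUBE_HOME/.minikube/files and mirrors anything it finds onto the node at the same relative path, which is how 212392.pem lands in /etc/ssl/certs. A small sketch of that path mapping, assuming a plain filesystem walk (the real code also scans the addons directory shown above):

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

func main() {
	// Illustrative root; the log scans <minikube home>/.minikube/files.
	root := "/home/jenkins/minikube-integration/19128-13846/.minikube/files"
	_ = filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, _ := filepath.Rel(root, path)
		// e.g. etc/ssl/certs/212392.pem -> copied to /etc/ssl/certs/212392.pem on the node
		fmt.Printf("local asset: %s -> /%s\n", path, rel)
		return nil
	})
}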
	I0625 16:51:44.808849   67510 start.go:296] duration metric: took 125.956993ms for postStartSetup
	I0625 16:51:44.808893   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetConfigRaw
	I0625 16:51:44.809468   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetIP
	I0625 16:51:44.812126   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.812455   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:44.812480   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.812735   67510 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/config.json ...
	I0625 16:51:44.812939   67510 start.go:128] duration metric: took 24.669466141s to createHost
	I0625 16:51:44.812954   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHHostname
	I0625 16:51:44.815103   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.815398   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:44.815414   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.815490   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHPort
	I0625 16:51:44.815667   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:44.815842   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:44.815987   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHUsername
	I0625 16:51:44.816166   67510 main.go:141] libmachine: Using SSH client type: native
	I0625 16:51:44.816313   67510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.83.28 22 <nil> <nil>}
	I0625 16:51:44.816318   67510 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0625 16:51:44.923959   67510 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719334304.900827745
	
	I0625 16:51:44.923971   67510 fix.go:216] guest clock: 1719334304.900827745
	I0625 16:51:44.923979   67510 fix.go:229] Guest: 2024-06-25 16:51:44.900827745 +0000 UTC Remote: 2024-06-25 16:51:44.812944397 +0000 UTC m=+24.772173068 (delta=87.883348ms)
	I0625 16:51:44.924026   67510 fix.go:200] guest clock delta is within tolerance: 87.883348ms
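fix.go reads the guest clock with date +%s.%N, compares it with the host clock at the moment the command returns, and only resyncs when the delta leaves tolerance; here the 87.883348ms difference is accepted. A small sketch of that comparison; the 2s threshold is illustrative, not minikube's actual constant:

package main

import (
	"fmt"
	"math"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	// In minikube this runs over SSH inside the guest; here it runs locally.
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		panic(err)
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(int64(secs), int64(math.Mod(secs, 1)*1e9))
	delta := time.Since(guest)
	const tolerance = 2 * time.Second // illustrative threshold only
	if delta < -tolerance || delta > tolerance {
		fmt.Printf("clock delta %v exceeds tolerance, would resync\n", delta)
	} else {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	}
}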
	I0625 16:51:44.924031   67510 start.go:83] releasing machines lock for "cert-options-742979", held for 24.780626337s
	I0625 16:51:44.924054   67510 main.go:141] libmachine: (cert-options-742979) Calling .DriverName
	I0625 16:51:44.924325   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetIP
	I0625 16:51:44.927103   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.927468   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:44.927488   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.927632   67510 main.go:141] libmachine: (cert-options-742979) Calling .DriverName
	I0625 16:51:44.928152   67510 main.go:141] libmachine: (cert-options-742979) Calling .DriverName
	I0625 16:51:44.928324   67510 main.go:141] libmachine: (cert-options-742979) Calling .DriverName
	I0625 16:51:44.928404   67510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0625 16:51:44.928444   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHHostname
	I0625 16:51:44.928541   67510 ssh_runner.go:195] Run: cat /version.json
	I0625 16:51:44.928559   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHHostname
	I0625 16:51:44.931726   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.931963   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.932100   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:44.932132   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.932249   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:44.932270   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:44.932320   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHPort
	I0625 16:51:44.932477   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:44.932492   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHPort
	I0625 16:51:44.932601   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHKeyPath
	I0625 16:51:44.932642   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHUsername
	I0625 16:51:44.932703   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetSSHUsername
	I0625 16:51:44.932783   67510 sshutil.go:53] new ssh client: &{IP:192.168.83.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/cert-options-742979/id_rsa Username:docker}
	I0625 16:51:44.932869   67510 sshutil.go:53] new ssh client: &{IP:192.168.83.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/cert-options-742979/id_rsa Username:docker}
	I0625 16:51:45.039524   67510 ssh_runner.go:195] Run: systemctl --version
	I0625 16:51:45.046601   67510 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0625 16:51:45.207453   67510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0625 16:51:45.216059   67510 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0625 16:51:45.216113   67510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0625 16:51:45.238074   67510 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0625 16:51:45.238085   67510 start.go:494] detecting cgroup driver to use...
	I0625 16:51:45.238165   67510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0625 16:51:45.257881   67510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0625 16:51:45.274535   67510 docker.go:217] disabling cri-docker service (if available) ...
	I0625 16:51:45.274579   67510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0625 16:51:45.294849   67510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0625 16:51:45.312759   67510 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0625 16:51:45.439612   67510 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0625 16:51:45.605628   67510 docker.go:233] disabling docker service ...
	I0625 16:51:45.605682   67510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0625 16:51:45.623610   67510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0625 16:51:45.637774   67510 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0625 16:51:45.791282   67510 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0625 16:51:45.914503   67510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0625 16:51:45.929677   67510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0625 16:51:45.949988   67510 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0625 16:51:45.950059   67510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:51:45.961071   67510 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0625 16:51:45.961110   67510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:51:45.971851   67510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:51:45.982435   67510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:51:45.993177   67510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0625 16:51:46.004128   67510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:51:46.014665   67510 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:51:46.032763   67510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0625 16:51:46.045076   67510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0625 16:51:46.056565   67510 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0625 16:51:46.056609   67510 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0625 16:51:46.072971   67510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0625 16:51:46.083694   67510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 16:51:46.213002   67510 ssh_runner.go:195] Run: sudo systemctl restart crio
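The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins pause_image to registry.k8s.io/pause:3.9, sets cgroup_manager to cgroupfs, forces conmon_cgroup to pod, injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls, then reloads systemd and restarts crio. A rough Go equivalent of the two simplest substitutions, mirroring only the sed expressions shown in the log:

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // the same file the sed commands touch
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	cfg := string(data)
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	cfg = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(cfg, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	cfg = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(cfg, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(path, []byte(cfg), 0644); err != nil {
		panic(err)
	}
	// minikube then runs: systemctl daemon-reload && systemctl restart crio
}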
	I0625 16:51:46.363181   67510 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0625 16:51:46.363247   67510 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0625 16:51:46.368170   67510 start.go:562] Will wait 60s for crictl version
	I0625 16:51:46.368214   67510 ssh_runner.go:195] Run: which crictl
	I0625 16:51:46.372091   67510 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0625 16:51:46.416465   67510 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0625 16:51:46.416626   67510 ssh_runner.go:195] Run: crio --version
	I0625 16:51:46.448357   67510 ssh_runner.go:195] Run: crio --version
	I0625 16:51:46.483901   67510 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0625 16:51:44.200835   66820 pod_ready.go:102] pod "etcd-pause-756277" in "kube-system" namespace has status "Ready":"False"
	I0625 16:51:46.201285   66820 pod_ready.go:102] pod "etcd-pause-756277" in "kube-system" namespace has status "Ready":"False"
	I0625 16:51:44.926581   67969 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0625 16:51:44.926749   67969 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19128-13846/.minikube/bin/docker-machine-driver-kvm2
	I0625 16:51:44.926793   67969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:51:44.943707   67969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46287
	I0625 16:51:44.944112   67969 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:51:44.944849   67969 main.go:141] libmachine: Using API Version  1
	I0625 16:51:44.944872   67969 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:51:44.945231   67969 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:51:44.945429   67969 main.go:141] libmachine: (old-k8s-version-462347) Calling .GetMachineName
	I0625 16:51:44.945565   67969 main.go:141] libmachine: (old-k8s-version-462347) Calling .DriverName
	I0625 16:51:44.945713   67969 start.go:159] libmachine.API.Create for "old-k8s-version-462347" (driver="kvm2")
	I0625 16:51:44.945739   67969 client.go:168] LocalClient.Create starting
	I0625 16:51:44.945776   67969 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem
	I0625 16:51:44.945813   67969 main.go:141] libmachine: Decoding PEM data...
	I0625 16:51:44.945832   67969 main.go:141] libmachine: Parsing certificate...
	I0625 16:51:44.945906   67969 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem
	I0625 16:51:44.945931   67969 main.go:141] libmachine: Decoding PEM data...
	I0625 16:51:44.945952   67969 main.go:141] libmachine: Parsing certificate...
	I0625 16:51:44.945976   67969 main.go:141] libmachine: Running pre-create checks...
	I0625 16:51:44.945995   67969 main.go:141] libmachine: (old-k8s-version-462347) Calling .PreCreateCheck
	I0625 16:51:44.946433   67969 main.go:141] libmachine: (old-k8s-version-462347) Calling .GetConfigRaw
	I0625 16:51:44.946936   67969 main.go:141] libmachine: Creating machine...
	I0625 16:51:44.946955   67969 main.go:141] libmachine: (old-k8s-version-462347) Calling .Create
	I0625 16:51:44.947101   67969 main.go:141] libmachine: (old-k8s-version-462347) Creating KVM machine...
	I0625 16:51:44.948379   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | found existing default KVM network
	I0625 16:51:44.949923   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:44.949766   68009 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026e0d0}
	I0625 16:51:44.949948   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | created network xml: 
	I0625 16:51:44.949960   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | <network>
	I0625 16:51:44.949971   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG |   <name>mk-old-k8s-version-462347</name>
	I0625 16:51:44.949979   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG |   <dns enable='no'/>
	I0625 16:51:44.949991   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG |   
	I0625 16:51:44.950014   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0625 16:51:44.950029   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG |     <dhcp>
	I0625 16:51:44.950040   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0625 16:51:44.950052   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG |     </dhcp>
	I0625 16:51:44.950061   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG |   </ip>
	I0625 16:51:44.950069   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG |   
	I0625 16:51:44.950081   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | </network>
	I0625 16:51:44.950091   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | 
	I0625 16:51:44.955636   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | trying to create private KVM network mk-old-k8s-version-462347 192.168.39.0/24...
	I0625 16:51:45.030929   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | private KVM network mk-old-k8s-version-462347 192.168.39.0/24 created
	I0625 16:51:45.030963   67969 main.go:141] libmachine: (old-k8s-version-462347) Setting up store path in /home/jenkins/minikube-integration/19128-13846/.minikube/machines/old-k8s-version-462347 ...
	I0625 16:51:45.030979   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:45.030907   68009 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 16:51:45.031001   67969 main.go:141] libmachine: (old-k8s-version-462347) Building disk image from file:///home/jenkins/minikube-integration/19128-13846/.minikube/cache/iso/amd64/minikube-v1.33.1-1719245461-19128-amd64.iso
	I0625 16:51:45.031077   67969 main.go:141] libmachine: (old-k8s-version-462347) Downloading /home/jenkins/minikube-integration/19128-13846/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19128-13846/.minikube/cache/iso/amd64/minikube-v1.33.1-1719245461-19128-amd64.iso...
	I0625 16:51:45.295483   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:45.295366   68009 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/old-k8s-version-462347/id_rsa...
	I0625 16:51:45.606488   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:45.606365   68009 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/old-k8s-version-462347/old-k8s-version-462347.rawdisk...
	I0625 16:51:45.606517   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | Writing magic tar header
	I0625 16:51:45.606530   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | Writing SSH key tar header
	I0625 16:51:45.606620   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:45.606558   68009 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19128-13846/.minikube/machines/old-k8s-version-462347 ...
	I0625 16:51:45.606709   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube/machines/old-k8s-version-462347
	I0625 16:51:45.606736   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube/machines
	I0625 16:51:45.606755   67969 main.go:141] libmachine: (old-k8s-version-462347) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube/machines/old-k8s-version-462347 (perms=drwx------)
	I0625 16:51:45.606771   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 16:51:45.606782   67969 main.go:141] libmachine: (old-k8s-version-462347) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube/machines (perms=drwxr-xr-x)
	I0625 16:51:45.606795   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19128-13846
	I0625 16:51:45.606807   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0625 16:51:45.606821   67969 main.go:141] libmachine: (old-k8s-version-462347) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846/.minikube (perms=drwxr-xr-x)
	I0625 16:51:45.606836   67969 main.go:141] libmachine: (old-k8s-version-462347) Setting executable bit set on /home/jenkins/minikube-integration/19128-13846 (perms=drwxrwxr-x)
	I0625 16:51:45.606849   67969 main.go:141] libmachine: (old-k8s-version-462347) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0625 16:51:45.606864   67969 main.go:141] libmachine: (old-k8s-version-462347) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0625 16:51:45.606872   67969 main.go:141] libmachine: (old-k8s-version-462347) Creating domain...
	I0625 16:51:45.606884   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | Checking permissions on dir: /home/jenkins
	I0625 16:51:45.606916   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | Checking permissions on dir: /home
	I0625 16:51:45.606928   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | Skipping /home - not owner
	I0625 16:51:45.608102   67969 main.go:141] libmachine: (old-k8s-version-462347) define libvirt domain using xml: 
	I0625 16:51:45.608123   67969 main.go:141] libmachine: (old-k8s-version-462347) <domain type='kvm'>
	I0625 16:51:45.608131   67969 main.go:141] libmachine: (old-k8s-version-462347)   <name>old-k8s-version-462347</name>
	I0625 16:51:45.608140   67969 main.go:141] libmachine: (old-k8s-version-462347)   <memory unit='MiB'>2200</memory>
	I0625 16:51:45.608149   67969 main.go:141] libmachine: (old-k8s-version-462347)   <vcpu>2</vcpu>
	I0625 16:51:45.608156   67969 main.go:141] libmachine: (old-k8s-version-462347)   <features>
	I0625 16:51:45.608169   67969 main.go:141] libmachine: (old-k8s-version-462347)     <acpi/>
	I0625 16:51:45.608176   67969 main.go:141] libmachine: (old-k8s-version-462347)     <apic/>
	I0625 16:51:45.608190   67969 main.go:141] libmachine: (old-k8s-version-462347)     <pae/>
	I0625 16:51:45.608197   67969 main.go:141] libmachine: (old-k8s-version-462347)     
	I0625 16:51:45.608207   67969 main.go:141] libmachine: (old-k8s-version-462347)   </features>
	I0625 16:51:45.608214   67969 main.go:141] libmachine: (old-k8s-version-462347)   <cpu mode='host-passthrough'>
	I0625 16:51:45.608219   67969 main.go:141] libmachine: (old-k8s-version-462347)   
	I0625 16:51:45.608226   67969 main.go:141] libmachine: (old-k8s-version-462347)   </cpu>
	I0625 16:51:45.608255   67969 main.go:141] libmachine: (old-k8s-version-462347)   <os>
	I0625 16:51:45.608277   67969 main.go:141] libmachine: (old-k8s-version-462347)     <type>hvm</type>
	I0625 16:51:45.608288   67969 main.go:141] libmachine: (old-k8s-version-462347)     <boot dev='cdrom'/>
	I0625 16:51:45.608297   67969 main.go:141] libmachine: (old-k8s-version-462347)     <boot dev='hd'/>
	I0625 16:51:45.608311   67969 main.go:141] libmachine: (old-k8s-version-462347)     <bootmenu enable='no'/>
	I0625 16:51:45.608322   67969 main.go:141] libmachine: (old-k8s-version-462347)   </os>
	I0625 16:51:45.608333   67969 main.go:141] libmachine: (old-k8s-version-462347)   <devices>
	I0625 16:51:45.608345   67969 main.go:141] libmachine: (old-k8s-version-462347)     <disk type='file' device='cdrom'>
	I0625 16:51:45.608361   67969 main.go:141] libmachine: (old-k8s-version-462347)       <source file='/home/jenkins/minikube-integration/19128-13846/.minikube/machines/old-k8s-version-462347/boot2docker.iso'/>
	I0625 16:51:45.608375   67969 main.go:141] libmachine: (old-k8s-version-462347)       <target dev='hdc' bus='scsi'/>
	I0625 16:51:45.608387   67969 main.go:141] libmachine: (old-k8s-version-462347)       <readonly/>
	I0625 16:51:45.608397   67969 main.go:141] libmachine: (old-k8s-version-462347)     </disk>
	I0625 16:51:45.608408   67969 main.go:141] libmachine: (old-k8s-version-462347)     <disk type='file' device='disk'>
	I0625 16:51:45.608425   67969 main.go:141] libmachine: (old-k8s-version-462347)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0625 16:51:45.608446   67969 main.go:141] libmachine: (old-k8s-version-462347)       <source file='/home/jenkins/minikube-integration/19128-13846/.minikube/machines/old-k8s-version-462347/old-k8s-version-462347.rawdisk'/>
	I0625 16:51:45.608459   67969 main.go:141] libmachine: (old-k8s-version-462347)       <target dev='hda' bus='virtio'/>
	I0625 16:51:45.608471   67969 main.go:141] libmachine: (old-k8s-version-462347)     </disk>
	I0625 16:51:45.608484   67969 main.go:141] libmachine: (old-k8s-version-462347)     <interface type='network'>
	I0625 16:51:45.608497   67969 main.go:141] libmachine: (old-k8s-version-462347)       <source network='mk-old-k8s-version-462347'/>
	I0625 16:51:45.608525   67969 main.go:141] libmachine: (old-k8s-version-462347)       <model type='virtio'/>
	I0625 16:51:45.608562   67969 main.go:141] libmachine: (old-k8s-version-462347)     </interface>
	I0625 16:51:45.608576   67969 main.go:141] libmachine: (old-k8s-version-462347)     <interface type='network'>
	I0625 16:51:45.608587   67969 main.go:141] libmachine: (old-k8s-version-462347)       <source network='default'/>
	I0625 16:51:45.608597   67969 main.go:141] libmachine: (old-k8s-version-462347)       <model type='virtio'/>
	I0625 16:51:45.608605   67969 main.go:141] libmachine: (old-k8s-version-462347)     </interface>
	I0625 16:51:45.608610   67969 main.go:141] libmachine: (old-k8s-version-462347)     <serial type='pty'>
	I0625 16:51:45.608622   67969 main.go:141] libmachine: (old-k8s-version-462347)       <target port='0'/>
	I0625 16:51:45.608633   67969 main.go:141] libmachine: (old-k8s-version-462347)     </serial>
	I0625 16:51:45.608641   67969 main.go:141] libmachine: (old-k8s-version-462347)     <console type='pty'>
	I0625 16:51:45.608654   67969 main.go:141] libmachine: (old-k8s-version-462347)       <target type='serial' port='0'/>
	I0625 16:51:45.608665   67969 main.go:141] libmachine: (old-k8s-version-462347)     </console>
	I0625 16:51:45.608677   67969 main.go:141] libmachine: (old-k8s-version-462347)     <rng model='virtio'>
	I0625 16:51:45.608689   67969 main.go:141] libmachine: (old-k8s-version-462347)       <backend model='random'>/dev/random</backend>
	I0625 16:51:45.608707   67969 main.go:141] libmachine: (old-k8s-version-462347)     </rng>
	I0625 16:51:45.608715   67969 main.go:141] libmachine: (old-k8s-version-462347)     
	I0625 16:51:45.608727   67969 main.go:141] libmachine: (old-k8s-version-462347)     
	I0625 16:51:45.608739   67969 main.go:141] libmachine: (old-k8s-version-462347)   </devices>
	I0625 16:51:45.608754   67969 main.go:141] libmachine: (old-k8s-version-462347) </domain>
	I0625 16:51:45.608767   67969 main.go:141] libmachine: (old-k8s-version-462347) 
	I0625 16:51:45.612847   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | domain old-k8s-version-462347 has defined MAC address 52:54:00:01:69:60 in network default
	I0625 16:51:45.613504   67969 main.go:141] libmachine: (old-k8s-version-462347) Ensuring networks are active...
	I0625 16:51:45.613527   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | domain old-k8s-version-462347 has defined MAC address 52:54:00:06:95:fd in network mk-old-k8s-version-462347
	I0625 16:51:45.614278   67969 main.go:141] libmachine: (old-k8s-version-462347) Ensuring network default is active
	I0625 16:51:45.614865   67969 main.go:141] libmachine: (old-k8s-version-462347) Ensuring network mk-old-k8s-version-462347 is active
	I0625 16:51:45.615553   67969 main.go:141] libmachine: (old-k8s-version-462347) Getting domain xml...
	I0625 16:51:45.616178   67969 main.go:141] libmachine: (old-k8s-version-462347) Creating domain...
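The XML above describes the whole VM: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO attached as a cdrom, the rawdisk as the main drive, and two virtio NICs on mk-old-k8s-version-462347 and default. The driver hands it to libvirt to define and boot the domain; the same step can be reproduced with the virsh CLI, as in this sketch (paths and names are illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// defineAndStart mirrors the driver's "define libvirt domain using xml" and
// "Creating domain..." steps using the virsh CLI instead of the libvirt API.
func defineAndStart(xmlPath, name string) error {
	if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh define: %v: %s", err, out)
	}
	if out, err := exec.Command("virsh", "start", name).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := defineAndStart("/tmp/old-k8s-version-462347.xml", "old-k8s-version-462347"); err != nil {
		fmt.Println(err)
	}
}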
	I0625 16:51:46.957433   67969 main.go:141] libmachine: (old-k8s-version-462347) Waiting to get IP...
	I0625 16:51:46.958505   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | domain old-k8s-version-462347 has defined MAC address 52:54:00:06:95:fd in network mk-old-k8s-version-462347
	I0625 16:51:46.959067   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | unable to find current IP address of domain old-k8s-version-462347 in network mk-old-k8s-version-462347
	I0625 16:51:46.959104   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:46.959055   68009 retry.go:31] will retry after 223.641081ms: waiting for machine to come up
	I0625 16:51:47.184757   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | domain old-k8s-version-462347 has defined MAC address 52:54:00:06:95:fd in network mk-old-k8s-version-462347
	I0625 16:51:47.185427   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | unable to find current IP address of domain old-k8s-version-462347 in network mk-old-k8s-version-462347
	I0625 16:51:47.185454   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:47.185349   68009 retry.go:31] will retry after 246.556335ms: waiting for machine to come up
	I0625 16:51:47.433988   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | domain old-k8s-version-462347 has defined MAC address 52:54:00:06:95:fd in network mk-old-k8s-version-462347
	I0625 16:51:47.434905   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | unable to find current IP address of domain old-k8s-version-462347 in network mk-old-k8s-version-462347
	I0625 16:51:47.434935   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:47.434851   68009 retry.go:31] will retry after 303.860912ms: waiting for machine to come up
	I0625 16:51:47.740500   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | domain old-k8s-version-462347 has defined MAC address 52:54:00:06:95:fd in network mk-old-k8s-version-462347
	I0625 16:51:47.741087   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | unable to find current IP address of domain old-k8s-version-462347 in network mk-old-k8s-version-462347
	I0625 16:51:47.741112   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:47.741041   68009 retry.go:31] will retry after 411.392596ms: waiting for machine to come up
	I0625 16:51:48.153766   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | domain old-k8s-version-462347 has defined MAC address 52:54:00:06:95:fd in network mk-old-k8s-version-462347
	I0625 16:51:48.154313   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | unable to find current IP address of domain old-k8s-version-462347 in network mk-old-k8s-version-462347
	I0625 16:51:48.154336   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:48.154223   68009 retry.go:31] will retry after 691.010311ms: waiting for machine to come up
	I0625 16:51:46.485144   67510 main.go:141] libmachine: (cert-options-742979) Calling .GetIP
	I0625 16:51:46.488663   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:46.489126   67510 main.go:141] libmachine: (cert-options-742979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:c8:1f", ip: ""} in network mk-cert-options-742979: {Iface:virbr4 ExpiryTime:2024-06-25 17:51:34 +0000 UTC Type:0 Mac:52:54:00:b5:c8:1f Iaid: IPaddr:192.168.83.28 Prefix:24 Hostname:cert-options-742979 Clientid:01:52:54:00:b5:c8:1f}
	I0625 16:51:46.489143   67510 main.go:141] libmachine: (cert-options-742979) DBG | domain cert-options-742979 has defined IP address 192.168.83.28 and MAC address 52:54:00:b5:c8:1f in network mk-cert-options-742979
	I0625 16:51:46.489301   67510 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0625 16:51:46.493495   67510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0625 16:51:46.506558   67510 kubeadm.go:877] updating cluster {Name:cert-options-742979 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.30.2 ClusterName:cert-options-742979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.28 Port:8555 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0625 16:51:46.506669   67510 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0625 16:51:46.506721   67510 ssh_runner.go:195] Run: sudo crictl images --output json
	I0625 16:51:46.540006   67510 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0625 16:51:46.540055   67510 ssh_runner.go:195] Run: which lz4
	I0625 16:51:46.544239   67510 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0625 16:51:46.548466   67510 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0625 16:51:46.548482   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0625 16:51:47.993202   67510 crio.go:462] duration metric: took 1.448983606s to copy over tarball
	I0625 16:51:47.993286   67510 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0625 16:51:50.283910   67510 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.290597092s)
	I0625 16:51:50.283927   67510 crio.go:469] duration metric: took 2.290707531s to extract the tarball
	I0625 16:51:50.283934   67510 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0625 16:51:50.324623   67510 ssh_runner.go:195] Run: sudo crictl images --output json
	I0625 16:51:50.373947   67510 crio.go:514] all images are preloaded for cri-o runtime.
	I0625 16:51:50.373960   67510 cache_images.go:84] Images are preloaded, skipping loading
	I0625 16:51:50.373968   67510 kubeadm.go:928] updating node { 192.168.83.28 8555 v1.30.2 crio true true} ...
	I0625 16:51:50.374107   67510 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-options-742979 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:cert-options-742979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0625 16:51:50.374190   67510 ssh_runner.go:195] Run: crio config
	I0625 16:51:50.423983   67510 cni.go:84] Creating CNI manager for ""
	I0625 16:51:50.423991   67510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0625 16:51:50.423998   67510 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0625 16:51:50.424015   67510 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.28 APIServerPort:8555 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-options-742979 NodeName:cert-options-742979 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0625 16:51:50.424147   67510 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.28
	  bindPort: 8555
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-options-742979"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.28
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.28"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8555
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0625 16:51:50.424200   67510 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0625 16:51:50.434569   67510 binaries.go:44] Found k8s binaries, skipping transfer
	I0625 16:51:50.434642   67510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0625 16:51:50.445427   67510 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0625 16:51:50.466026   67510 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0625 16:51:50.486011   67510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
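The 2160-byte kubeadm.yaml.new just copied over is the rendered config shown above: an InitConfiguration advertising 192.168.83.28:8555, a ClusterConfiguration carrying the extra API server SANs and admission plugins under test, plus KubeletConfiguration and KubeProxyConfiguration stanzas. It can be exercised without initializing anything by leaning on kubeadm's dry-run support; a hedged sketch (kubeadm init accepts --config and --dry-run, but this is not the exact command minikube runs):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Dry-run the rendered config instead of bringing up a real control plane.
	cmd := exec.Command("kubeadm", "init", "--config", "/var/tmp/minikube/kubeadm.yaml.new", "--dry-run")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("dry-run failed:", err)
	}
}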
	I0625 16:51:50.503951   67510 ssh_runner.go:195] Run: grep 192.168.83.28	control-plane.minikube.internal$ /etc/hosts
	I0625 16:51:50.507808   67510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0625 16:51:50.519289   67510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 16:51:50.652497   67510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0625 16:51:50.669729   67510 certs.go:68] Setting up /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979 for IP: 192.168.83.28
	I0625 16:51:50.669740   67510 certs.go:194] generating shared ca certs ...
	I0625 16:51:50.669756   67510 certs.go:226] acquiring lock for ca certs: {Name:mkac904b769881cd26c50f043dc80ff92937f71f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:51:50.669933   67510 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key
	I0625 16:51:50.669978   67510 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key
	I0625 16:51:50.669986   67510 certs.go:256] generating profile certs ...
	I0625 16:51:50.670068   67510 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/client.key
	I0625 16:51:50.670080   67510 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/client.crt with IP's: []
	I0625 16:51:50.869978   67510 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/client.crt ...
	I0625 16:51:50.870000   67510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/client.crt: {Name:mkff20436270ca2b0a91285af2158411579c5fff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:51:50.870224   67510 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/client.key ...
	I0625 16:51:50.870236   67510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/client.key: {Name:mk4382263218940c501d9977a4739d1f6618207e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:51:50.870340   67510 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/apiserver.key.4a02eec9
	I0625 16:51:50.870356   67510 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/apiserver.crt.4a02eec9 with IP's: [127.0.0.1 192.168.15.15 10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.28]
	I0625 16:51:50.987125   67510 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/apiserver.crt.4a02eec9 ...
	I0625 16:51:50.987140   67510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/apiserver.crt.4a02eec9: {Name:mk70bc77e5b289b08c519ae4adff17614dc76fb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:51:50.987289   67510 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/apiserver.key.4a02eec9 ...
	I0625 16:51:50.987297   67510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/apiserver.key.4a02eec9: {Name:mkb5316758f637fc71996b7be8a9ac21e7bdd7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:51:50.987363   67510 certs.go:381] copying /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/apiserver.crt.4a02eec9 -> /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/apiserver.crt
	I0625 16:51:50.987443   67510 certs.go:385] copying /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/apiserver.key.4a02eec9 -> /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/apiserver.key
	I0625 16:51:50.987492   67510 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/proxy-client.key
	I0625 16:51:50.987502   67510 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/proxy-client.crt with IP's: []
	I0625 16:51:51.253108   67510 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/proxy-client.crt ...
	I0625 16:51:51.253122   67510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/proxy-client.crt: {Name:mk19b8724de0676ee7a75414d1c2a59129996edc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:51:51.253306   67510 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/proxy-client.key ...
	I0625 16:51:51.253314   67510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/proxy-client.key: {Name:mkeaaf451e75efcdede17d3530fbcd76f62f634c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:51:51.253481   67510 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem (1338 bytes)
	W0625 16:51:51.253510   67510 certs.go:480] ignoring /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239_empty.pem, impossibly tiny 0 bytes
	I0625 16:51:51.253521   67510 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca-key.pem (1679 bytes)
	I0625 16:51:51.253540   67510 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/ca.pem (1078 bytes)
	I0625 16:51:51.253558   67510 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/cert.pem (1123 bytes)
	I0625 16:51:51.253575   67510 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/certs/key.pem (1679 bytes)
	I0625 16:51:51.253604   67510 certs.go:484] found cert: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem (1708 bytes)
	I0625 16:51:51.254205   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0625 16:51:51.286296   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0625 16:51:51.319132   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0625 16:51:51.346415   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0625 16:51:51.371116   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1480 bytes)
	I0625 16:51:51.397075   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0625 16:51:51.422408   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0625 16:51:51.446626   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/cert-options-742979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0625 16:51:51.473112   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0625 16:51:51.504432   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/certs/21239.pem --> /usr/share/ca-certificates/21239.pem (1338 bytes)
	I0625 16:51:51.546504   67510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/ssl/certs/212392.pem --> /usr/share/ca-certificates/212392.pem (1708 bytes)
	I0625 16:51:51.577124   67510 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0625 16:51:51.595227   67510 ssh_runner.go:195] Run: openssl version
	I0625 16:51:51.601077   67510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0625 16:51:51.612617   67510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:51:51.617496   67510 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 25 15:10 /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:51:51.617553   67510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0625 16:51:51.624030   67510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0625 16:51:51.635890   67510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21239.pem && ln -fs /usr/share/ca-certificates/21239.pem /etc/ssl/certs/21239.pem"
	I0625 16:51:51.646824   67510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21239.pem
	I0625 16:51:51.651368   67510 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 25 15:51 /usr/share/ca-certificates/21239.pem
	I0625 16:51:51.651403   67510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21239.pem
	I0625 16:51:51.656861   67510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21239.pem /etc/ssl/certs/51391683.0"
	I0625 16:51:51.667523   67510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/212392.pem && ln -fs /usr/share/ca-certificates/212392.pem /etc/ssl/certs/212392.pem"
	I0625 16:51:51.678684   67510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/212392.pem
	I0625 16:51:51.683363   67510 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 25 15:51 /usr/share/ca-certificates/212392.pem
	I0625 16:51:51.683403   67510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/212392.pem
	I0625 16:51:51.689255   67510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/212392.pem /etc/ssl/certs/3ec20f2e.0"
	I0625 16:51:51.701410   67510 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0625 16:51:51.705561   67510 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0625 16:51:51.705604   67510 kubeadm.go:391] StartCluster: {Name:cert-options-742979 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.2 ClusterName:cert-options-742979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.28 Port:8555 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:doc
ker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 16:51:51.705682   67510 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0625 16:51:51.705742   67510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0625 16:51:51.757540   67510 cri.go:89] found id: ""
	I0625 16:51:51.757611   67510 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0625 16:51:51.769400   67510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0625 16:51:51.780397   67510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0625 16:51:51.790663   67510 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0625 16:51:51.790672   67510 kubeadm.go:156] found existing configuration files:
	
	I0625 16:51:51.790715   67510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/admin.conf
	I0625 16:51:51.800156   67510 kubeadm.go:162] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0625 16:51:51.800203   67510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0625 16:51:51.809987   67510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/kubelet.conf
	I0625 16:51:51.821114   67510 kubeadm.go:162] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0625 16:51:51.821163   67510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0625 16:51:51.832611   67510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/controller-manager.conf
	I0625 16:51:51.843420   67510 kubeadm.go:162] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0625 16:51:51.843461   67510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0625 16:51:51.855025   67510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/scheduler.conf
	I0625 16:51:51.866223   67510 kubeadm.go:162] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0625 16:51:51.866256   67510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0625 16:51:51.876893   67510 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0625 16:51:52.012810   67510 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0625 16:51:52.013123   67510 kubeadm.go:309] [preflight] Running pre-flight checks
	I0625 16:51:52.146822   67510 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0625 16:51:52.146962   67510 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0625 16:51:52.147083   67510 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0625 16:51:52.403860   67510 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0625 16:51:48.202332   66820 pod_ready.go:102] pod "etcd-pause-756277" in "kube-system" namespace has status "Ready":"False"
	I0625 16:51:50.702553   66820 pod_ready.go:102] pod "etcd-pause-756277" in "kube-system" namespace has status "Ready":"False"
	I0625 16:51:51.201552   66820 pod_ready.go:92] pod "etcd-pause-756277" in "kube-system" namespace has status "Ready":"True"
	I0625 16:51:51.201575   66820 pod_ready.go:81] duration metric: took 9.006993828s for pod "etcd-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:51.201585   66820 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:51.207908   66820 pod_ready.go:92] pod "kube-apiserver-pause-756277" in "kube-system" namespace has status "Ready":"True"
	I0625 16:51:51.207929   66820 pod_ready.go:81] duration metric: took 6.337727ms for pod "kube-apiserver-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:51.207942   66820 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:51.715386   66820 pod_ready.go:92] pod "kube-controller-manager-pause-756277" in "kube-system" namespace has status "Ready":"True"
	I0625 16:51:51.715411   66820 pod_ready.go:81] duration metric: took 507.461084ms for pod "kube-controller-manager-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:51.715428   66820 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-k2flf" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:51.722419   66820 pod_ready.go:92] pod "kube-proxy-k2flf" in "kube-system" namespace has status "Ready":"True"
	I0625 16:51:51.722439   66820 pod_ready.go:81] duration metric: took 7.003281ms for pod "kube-proxy-k2flf" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:51.722451   66820 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:48.848078   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | domain old-k8s-version-462347 has defined MAC address 52:54:00:06:95:fd in network mk-old-k8s-version-462347
	I0625 16:51:48.848727   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | unable to find current IP address of domain old-k8s-version-462347 in network mk-old-k8s-version-462347
	I0625 16:51:48.848753   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:48.848679   68009 retry.go:31] will retry after 615.70938ms: waiting for machine to come up
	I0625 16:51:49.466514   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | domain old-k8s-version-462347 has defined MAC address 52:54:00:06:95:fd in network mk-old-k8s-version-462347
	I0625 16:51:49.467030   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | unable to find current IP address of domain old-k8s-version-462347 in network mk-old-k8s-version-462347
	I0625 16:51:49.467082   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:49.466976   68009 retry.go:31] will retry after 1.098402085s: waiting for machine to come up
	I0625 16:51:50.566833   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | domain old-k8s-version-462347 has defined MAC address 52:54:00:06:95:fd in network mk-old-k8s-version-462347
	I0625 16:51:50.567412   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | unable to find current IP address of domain old-k8s-version-462347 in network mk-old-k8s-version-462347
	I0625 16:51:50.567439   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:50.567364   68009 retry.go:31] will retry after 1.338001197s: waiting for machine to come up
	I0625 16:51:51.906989   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | domain old-k8s-version-462347 has defined MAC address 52:54:00:06:95:fd in network mk-old-k8s-version-462347
	I0625 16:51:51.907694   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | unable to find current IP address of domain old-k8s-version-462347 in network mk-old-k8s-version-462347
	I0625 16:51:51.907722   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:51.907642   68009 retry.go:31] will retry after 1.695207109s: waiting for machine to come up
	I0625 16:51:52.532225   67510 out.go:204]   - Generating certificates and keys ...
	I0625 16:51:52.532378   67510 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0625 16:51:52.532501   67510 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0625 16:51:52.613355   67510 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0625 16:51:52.680508   67510 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0625 16:51:52.983167   67510 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0625 16:51:53.163535   67510 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0625 16:51:53.391112   67510 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0625 16:51:53.391258   67510 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [cert-options-742979 localhost] and IPs [192.168.83.28 127.0.0.1 ::1]
	I0625 16:51:53.563704   67510 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0625 16:51:53.563842   67510 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [cert-options-742979 localhost] and IPs [192.168.83.28 127.0.0.1 ::1]
	I0625 16:51:53.780439   67510 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0625 16:51:53.884239   67510 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0625 16:51:53.992173   67510 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0625 16:51:53.992278   67510 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0625 16:51:54.285955   67510 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0625 16:51:54.698504   67510 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0625 16:51:54.824516   67510 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0625 16:51:55.130931   67510 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0625 16:51:55.273440   67510 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0625 16:51:55.274408   67510 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0625 16:51:55.278236   67510 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0625 16:51:53.827154   66820 pod_ready.go:102] pod "kube-scheduler-pause-756277" in "kube-system" namespace has status "Ready":"False"
	I0625 16:51:55.729273   66820 pod_ready.go:92] pod "kube-scheduler-pause-756277" in "kube-system" namespace has status "Ready":"True"
	I0625 16:51:55.729299   66820 pod_ready.go:81] duration metric: took 4.006838878s for pod "kube-scheduler-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:55.729309   66820 pod_ready.go:38] duration metric: took 13.546558495s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0625 16:51:55.729328   66820 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0625 16:51:55.744967   66820 ops.go:34] apiserver oom_adj: -16
	I0625 16:51:55.744987   66820 kubeadm.go:591] duration metric: took 40.644428463s to restartPrimaryControlPlane
	I0625 16:51:55.744998   66820 kubeadm.go:393] duration metric: took 41.036637358s to StartCluster
	I0625 16:51:55.745020   66820 settings.go:142] acquiring lock: {Name:mk38d7db80b40da56857d65b8e7da05700cdb9d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:51:55.745098   66820 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19128-13846/kubeconfig
	I0625 16:51:55.746434   66820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/kubeconfig: {Name:mk71a37176bd7deadd1f1cd3c756fe56f3b0810d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 16:51:55.746739   66820 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.163 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0625 16:51:55.746958   66820 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0625 16:51:55.747056   66820 config.go:182] Loaded profile config "pause-756277": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:51:55.748448   66820 out.go:177] * Verifying Kubernetes components...
	I0625 16:51:55.749331   66820 out.go:177] * Enabled addons: 
	I0625 16:51:55.750150   66820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0625 16:51:55.750890   66820 addons.go:510] duration metric: took 3.934413ms for enable addons: enabled=[]
	I0625 16:51:55.925860   66820 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0625 16:51:55.945014   66820 node_ready.go:35] waiting up to 6m0s for node "pause-756277" to be "Ready" ...
	I0625 16:51:55.948792   66820 node_ready.go:49] node "pause-756277" has status "Ready":"True"
	I0625 16:51:55.948819   66820 node_ready.go:38] duration metric: took 3.766818ms for node "pause-756277" to be "Ready" ...
	I0625 16:51:55.948831   66820 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0625 16:51:55.959007   66820 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jsf7r" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:55.965535   66820 pod_ready.go:92] pod "coredns-7db6d8ff4d-jsf7r" in "kube-system" namespace has status "Ready":"True"
	I0625 16:51:55.965558   66820 pod_ready.go:81] duration metric: took 6.519765ms for pod "coredns-7db6d8ff4d-jsf7r" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:55.965569   66820 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:55.999867   66820 pod_ready.go:92] pod "etcd-pause-756277" in "kube-system" namespace has status "Ready":"True"
	I0625 16:51:55.999887   66820 pod_ready.go:81] duration metric: took 34.312113ms for pod "etcd-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:55.999897   66820 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:56.399680   66820 pod_ready.go:92] pod "kube-apiserver-pause-756277" in "kube-system" namespace has status "Ready":"True"
	I0625 16:51:56.399712   66820 pod_ready.go:81] duration metric: took 399.807529ms for pod "kube-apiserver-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:56.399726   66820 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:56.801024   66820 pod_ready.go:92] pod "kube-controller-manager-pause-756277" in "kube-system" namespace has status "Ready":"True"
	I0625 16:51:56.801046   66820 pod_ready.go:81] duration metric: took 401.311475ms for pod "kube-controller-manager-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:56.801055   66820 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k2flf" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:57.200381   66820 pod_ready.go:92] pod "kube-proxy-k2flf" in "kube-system" namespace has status "Ready":"True"
	I0625 16:51:57.200409   66820 pod_ready.go:81] duration metric: took 399.346662ms for pod "kube-proxy-k2flf" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:57.200421   66820 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:53.605486   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | domain old-k8s-version-462347 has defined MAC address 52:54:00:06:95:fd in network mk-old-k8s-version-462347
	I0625 16:51:53.605994   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | unable to find current IP address of domain old-k8s-version-462347 in network mk-old-k8s-version-462347
	I0625 16:51:53.606021   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:53.605938   68009 retry.go:31] will retry after 1.870496428s: waiting for machine to come up
	I0625 16:51:55.477847   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | domain old-k8s-version-462347 has defined MAC address 52:54:00:06:95:fd in network mk-old-k8s-version-462347
	I0625 16:51:55.478354   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | unable to find current IP address of domain old-k8s-version-462347 in network mk-old-k8s-version-462347
	I0625 16:51:55.478384   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:55.478308   68009 retry.go:31] will retry after 1.914303586s: waiting for machine to come up
	I0625 16:51:57.394848   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | domain old-k8s-version-462347 has defined MAC address 52:54:00:06:95:fd in network mk-old-k8s-version-462347
	I0625 16:51:57.395374   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | unable to find current IP address of domain old-k8s-version-462347 in network mk-old-k8s-version-462347
	I0625 16:51:57.395405   67969 main.go:141] libmachine: (old-k8s-version-462347) DBG | I0625 16:51:57.395336   68009 retry.go:31] will retry after 2.696563668s: waiting for machine to come up
	I0625 16:51:57.599748   66820 pod_ready.go:92] pod "kube-scheduler-pause-756277" in "kube-system" namespace has status "Ready":"True"
	I0625 16:51:57.599778   66820 pod_ready.go:81] duration metric: took 399.348589ms for pod "kube-scheduler-pause-756277" in "kube-system" namespace to be "Ready" ...
	I0625 16:51:57.599788   66820 pod_ready.go:38] duration metric: took 1.650945213s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0625 16:51:57.599806   66820 api_server.go:52] waiting for apiserver process to appear ...
	I0625 16:51:57.599866   66820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 16:51:57.623304   66820 api_server.go:72] duration metric: took 1.876524409s to wait for apiserver process to appear ...
	I0625 16:51:57.623334   66820 api_server.go:88] waiting for apiserver healthz status ...
	I0625 16:51:57.623363   66820 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I0625 16:51:57.634972   66820 api_server.go:279] https://192.168.50.163:8443/healthz returned 200:
	ok
	I0625 16:51:57.637240   66820 api_server.go:141] control plane version: v1.30.2
	I0625 16:51:57.637265   66820 api_server.go:131] duration metric: took 13.922241ms to wait for apiserver health ...
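[editorial note] The healthz probe logged above is a plain HTTPS GET against the apiserver that is expected to return 200 with body "ok". A hand-run equivalent for this cluster, assuming the default RBAC still permits anonymous access to /healthz (the endpoint and kubeconfig path below are the ones from this run and are not general):

    curl -sk https://192.168.50.163:8443/healthz     # expect: ok
    kubectl --kubeconfig /home/jenkins/minikube-integration/19128-13846/kubeconfig version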
	I0625 16:51:57.637275   66820 system_pods.go:43] waiting for kube-system pods to appear ...
	I0625 16:51:57.803247   66820 system_pods.go:59] 6 kube-system pods found
	I0625 16:51:57.803281   66820 system_pods.go:61] "coredns-7db6d8ff4d-jsf7r" [8ddacba2-d039-40c7-8731-ba8e5707cfda] Running
	I0625 16:51:57.803288   66820 system_pods.go:61] "etcd-pause-756277" [9b009204-4f10-4d01-9cc3-601cc13fcdbc] Running
	I0625 16:51:57.803294   66820 system_pods.go:61] "kube-apiserver-pause-756277" [384ff579-a83e-4186-9e58-5486ddbfc394] Running
	I0625 16:51:57.803300   66820 system_pods.go:61] "kube-controller-manager-pause-756277" [ac3c7fed-f5ca-4a8b-ae35-8e9c77f41153] Running
	I0625 16:51:57.803306   66820 system_pods.go:61] "kube-proxy-k2flf" [dc85c133-117a-4389-9f53-32d82b3e40ce] Running
	I0625 16:51:57.803312   66820 system_pods.go:61] "kube-scheduler-pause-756277" [39879154-54fd-4458-a274-228563ba7f39] Running
	I0625 16:51:57.803320   66820 system_pods.go:74] duration metric: took 166.02258ms to wait for pod list to return data ...
	I0625 16:51:57.803336   66820 default_sa.go:34] waiting for default service account to be created ...
	I0625 16:51:57.999810   66820 default_sa.go:45] found service account: "default"
	I0625 16:51:57.999837   66820 default_sa.go:55] duration metric: took 196.493717ms for default service account to be created ...
	I0625 16:51:57.999847   66820 system_pods.go:116] waiting for k8s-apps to be running ...
	I0625 16:51:58.201729   66820 system_pods.go:86] 6 kube-system pods found
	I0625 16:51:58.201759   66820 system_pods.go:89] "coredns-7db6d8ff4d-jsf7r" [8ddacba2-d039-40c7-8731-ba8e5707cfda] Running
	I0625 16:51:58.201764   66820 system_pods.go:89] "etcd-pause-756277" [9b009204-4f10-4d01-9cc3-601cc13fcdbc] Running
	I0625 16:51:58.201768   66820 system_pods.go:89] "kube-apiserver-pause-756277" [384ff579-a83e-4186-9e58-5486ddbfc394] Running
	I0625 16:51:58.201774   66820 system_pods.go:89] "kube-controller-manager-pause-756277" [ac3c7fed-f5ca-4a8b-ae35-8e9c77f41153] Running
	I0625 16:51:58.201780   66820 system_pods.go:89] "kube-proxy-k2flf" [dc85c133-117a-4389-9f53-32d82b3e40ce] Running
	I0625 16:51:58.201784   66820 system_pods.go:89] "kube-scheduler-pause-756277" [39879154-54fd-4458-a274-228563ba7f39] Running
	I0625 16:51:58.201790   66820 system_pods.go:126] duration metric: took 201.937614ms to wait for k8s-apps to be running ...
	I0625 16:51:58.201797   66820 system_svc.go:44] waiting for kubelet service to be running ....
	I0625 16:51:58.201838   66820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:51:58.217980   66820 system_svc.go:56] duration metric: took 16.174027ms WaitForService to wait for kubelet
	I0625 16:51:58.218008   66820 kubeadm.go:576] duration metric: took 2.471232731s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0625 16:51:58.218032   66820 node_conditions.go:102] verifying NodePressure condition ...
	I0625 16:51:58.400089   66820 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0625 16:51:58.400111   66820 node_conditions.go:123] node cpu capacity is 2
	I0625 16:51:58.400122   66820 node_conditions.go:105] duration metric: took 182.084463ms to run NodePressure ...
	I0625 16:51:58.400132   66820 start.go:240] waiting for startup goroutines ...
	I0625 16:51:58.400139   66820 start.go:245] waiting for cluster config update ...
	I0625 16:51:58.400146   66820 start.go:254] writing updated cluster config ...
	I0625 16:51:58.400413   66820 ssh_runner.go:195] Run: rm -f paused
	I0625 16:51:58.449800   66820 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0625 16:51:58.451685   66820 out.go:177] * Done! kubectl is now configured to use "pause-756277" cluster and "default" namespace by default
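[editorial note] Once a profile reports "Done!", the kubeconfig context has already been switched, so a quick manual smoke test from the host (generic kubectl usage, not part of the test itself) could be:

    kubectl config current-context     # should print pause-756277
    kubectl get pods -n kube-system    # the six pods listed earlier in this log should be Running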
	I0625 16:51:55.279954   67510 out.go:204]   - Booting up control plane ...
	I0625 16:51:55.280101   67510 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0625 16:51:55.282041   67510 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0625 16:51:55.284511   67510 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0625 16:51:55.313185   67510 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0625 16:51:55.314427   67510 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0625 16:51:55.314520   67510 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0625 16:51:55.473382   67510 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0625 16:51:55.473504   67510 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0625 16:51:55.974487   67510 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.297157ms
	I0625 16:51:55.974614   67510 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	
	
	==> CRI-O <==
	Jun 25 16:52:01 pause-756277 crio[2478]: time="2024-06-25 16:52:01.270190765Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e442bb46-77b4-491c-8a03-2eef1883956c name=/runtime.v1.RuntimeService/Version
	Jun 25 16:52:01 pause-756277 crio[2478]: time="2024-06-25 16:52:01.271237334Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b53209d0-15d9-4aab-8175-3bd79dff98c3 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:52:01 pause-756277 crio[2478]: time="2024-06-25 16:52:01.271572359Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719334321271553095,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b53209d0-15d9-4aab-8175-3bd79dff98c3 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:52:01 pause-756277 crio[2478]: time="2024-06-25 16:52:01.272390914Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a8169a2c-21e1-4483-a311-d180b31e5cf9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:52:01 pause-756277 crio[2478]: time="2024-06-25 16:52:01.272609330Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a8169a2c-21e1-4483-a311-d180b31e5cf9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:52:01 pause-756277 crio[2478]: time="2024-06-25 16:52:01.274270284Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1740598d5cf3e4ded267f89d2b1ce627811652faa5611ed1c54811aac4799b56,PodSandboxId:75ba72120d9946873b4404d468c1f50b920c84806daf2661a3bb6e1066e8bd3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719334297791062960,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7331ec8d20fdfec021ddab1d2b2e4438,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a25c3648d140888f573d7d76f44a4a9b301678446e57e7de9d648a15ef0e6477,PodSandboxId:88987c63b11430baa6c477d4f03c2fb16ce0f8cc82e05e1773300de3b3798151,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719334297789493345,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7640e520f03ea58f37be53c5e026b,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:815bfb4144a979c0d75febf219054e44f58be9cef61ae0e89f475aacac6d1797,PodSandboxId:27bccef8ab2f4cf098b444d320edd34cb819daa5663a5a9f6fb45f718a30ed71,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719334297769722348,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e062d122403f7f365a5b63f47c778e5,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6bf3ba,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2ee0a5a949a621f4b6dd6e3cddbb8da84c18c31f77678f5817f92c691f8c04a,PodSandboxId:c1ab1bd9d99dd5bac46c1f9bc67bddb946d666f2c9f983e74e08cd42b4631c89,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719334297762963682,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c158921e3bdda78ecc0ea9d20447d8,},Annotations:map[string]string{io.kubernetes.container.hash: 33772c37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35c64330d1f127ae90ecb45dc9fabcf2d04ee5c8ae6a8b906780deed57a4be43,PodSandboxId:51a664b19a7e20f1a04d601db6be55a6177b9becfe69785a6d151549a4dd066e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719334275418825022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jsf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddacba2-d039-40c7-8731-ba8e5707cfda,},Annotations:map[string]string{io.kubernetes.container.hash: df16ca51,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"}
,{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0822288d9ccfea7e829bb4c8c1ccbf4837614f5f2882d194b04550215bcf0d5,PodSandboxId:e6eca69bb88511a7765e98f9a0e1f9c8fb738497f34fa78aa51c9076d83fc375,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719334274576543392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2flf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc85c133-117a-4389-9f53-32d82b3e40ce,},Annotations:map[string]string{io.
kubernetes.container.hash: 2ed0814a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d61afd2cc4c8bd0458bb500b4ed4a32ae4210ac11f14960978760413d53aae9,PodSandboxId:c1ab1bd9d99dd5bac46c1f9bc67bddb946d666f2c9f983e74e08cd42b4631c89,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1719334274476386447,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c158921e3bdda78ecc0ea9d20447d8,},Annotations:map[string]string{io.kubernetes.container.hash: 33772c37,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc4673099e5d0c171adb781d9b2890366b76aebd88506fdcd169c982796c793,PodSandboxId:88987c63b11430baa6c477d4f03c2fb16ce0f8cc82e05e1773300de3b3798151,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1719334274489565438,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7640e520f03ea58f37be53c5e026b,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edec890ad5331763619a1058109ef59719931eb1e66170f810b25b86a63bbd3c,PodSandboxId:75ba72120d9946873b4404d468c1f50b920c84806daf2661a3bb6e1066e8bd3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1719334274422406316,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7331ec8d20fdfec021ddab1d2b2e4438,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a43a1e8fd4e02cc25bcade220c757c0d4c7e0c5ef687525fa7058aea35ce1d0e,PodSandboxId:27bccef8ab2f4cf098b444d320edd34cb819daa5663a5a9f6fb45f718a30ed71,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1719334274299312900,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e062d122403f7f365a5b63f47c778e5,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6bf3ba,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4711285f965e8c05454daca7fcdcc495b4cdb478f1da0464bbf229ee779c5f2a,PodSandboxId:f0857f728320f8fae5c8573045909064552e84eb979cb14a6161cde4254448d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719334187028383448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jsf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddacba2-d039-40c7-8731-ba8e5707cfda,},Annotations:map[string]string{io.kubernetes.container.hash: df16ca51,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f6fba0c0a9f02a1736519655e7546883e15c7aad2270f2c098353e2a7a73987,PodSandboxId:c0db4dbb08fbc0e070a59277c742bfda588f8402f9632600d8a72e1ffecabb90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1719334186354943404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2flf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: dc85c133-117a-4389-9f53-32d82b3e40ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2ed0814a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a8169a2c-21e1-4483-a311-d180b31e5cf9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:52:01 pause-756277 crio[2478]: time="2024-06-25 16:52:01.320940527Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=827ce907-b9f8-4049-a79b-fb377d4725d8 name=/runtime.v1.RuntimeService/Version
	Jun 25 16:52:01 pause-756277 crio[2478]: time="2024-06-25 16:52:01.321011597Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=827ce907-b9f8-4049-a79b-fb377d4725d8 name=/runtime.v1.RuntimeService/Version
	Jun 25 16:52:01 pause-756277 crio[2478]: time="2024-06-25 16:52:01.321777695Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c026475e-63c7-47bf-9299-f522719d07e8 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:52:01 pause-756277 crio[2478]: time="2024-06-25 16:52:01.322427643Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719334321322402264,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c026475e-63c7-47bf-9299-f522719d07e8 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:52:01 pause-756277 crio[2478]: time="2024-06-25 16:52:01.322905087Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e73c8ef9-04c9-4b27-930c-7290451edab0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:52:01 pause-756277 crio[2478]: time="2024-06-25 16:52:01.322978331Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e73c8ef9-04c9-4b27-930c-7290451edab0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:52:01 pause-756277 crio[2478]: time="2024-06-25 16:52:01.323398825Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1740598d5cf3e4ded267f89d2b1ce627811652faa5611ed1c54811aac4799b56,PodSandboxId:75ba72120d9946873b4404d468c1f50b920c84806daf2661a3bb6e1066e8bd3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719334297791062960,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7331ec8d20fdfec021ddab1d2b2e4438,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a25c3648d140888f573d7d76f44a4a9b301678446e57e7de9d648a15ef0e6477,PodSandboxId:88987c63b11430baa6c477d4f03c2fb16ce0f8cc82e05e1773300de3b3798151,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719334297789493345,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7640e520f03ea58f37be53c5e026b,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:815bfb4144a979c0d75febf219054e44f58be9cef61ae0e89f475aacac6d1797,PodSandboxId:27bccef8ab2f4cf098b444d320edd34cb819daa5663a5a9f6fb45f718a30ed71,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719334297769722348,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e062d122403f7f365a5b63f47c778e5,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6bf3ba,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2ee0a5a949a621f4b6dd6e3cddbb8da84c18c31f77678f5817f92c691f8c04a,PodSandboxId:c1ab1bd9d99dd5bac46c1f9bc67bddb946d666f2c9f983e74e08cd42b4631c89,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719334297762963682,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c158921e3bdda78ecc0ea9d20447d8,},Annotations:map[string]string{io.kubernetes.container.hash: 33772c37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35c64330d1f127ae90ecb45dc9fabcf2d04ee5c8ae6a8b906780deed57a4be43,PodSandboxId:51a664b19a7e20f1a04d601db6be55a6177b9becfe69785a6d151549a4dd066e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719334275418825022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jsf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddacba2-d039-40c7-8731-ba8e5707cfda,},Annotations:map[string]string{io.kubernetes.container.hash: df16ca51,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"}
,{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0822288d9ccfea7e829bb4c8c1ccbf4837614f5f2882d194b04550215bcf0d5,PodSandboxId:e6eca69bb88511a7765e98f9a0e1f9c8fb738497f34fa78aa51c9076d83fc375,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719334274576543392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2flf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc85c133-117a-4389-9f53-32d82b3e40ce,},Annotations:map[string]string{io.
kubernetes.container.hash: 2ed0814a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d61afd2cc4c8bd0458bb500b4ed4a32ae4210ac11f14960978760413d53aae9,PodSandboxId:c1ab1bd9d99dd5bac46c1f9bc67bddb946d666f2c9f983e74e08cd42b4631c89,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1719334274476386447,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c158921e3bdda78ecc0ea9d20447d8,},Annotations:map[string]string{io.kubernetes.container.hash: 33772c37,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc4673099e5d0c171adb781d9b2890366b76aebd88506fdcd169c982796c793,PodSandboxId:88987c63b11430baa6c477d4f03c2fb16ce0f8cc82e05e1773300de3b3798151,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1719334274489565438,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7640e520f03ea58f37be53c5e026b,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edec890ad5331763619a1058109ef59719931eb1e66170f810b25b86a63bbd3c,PodSandboxId:75ba72120d9946873b4404d468c1f50b920c84806daf2661a3bb6e1066e8bd3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1719334274422406316,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7331ec8d20fdfec021ddab1d2b2e4438,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a43a1e8fd4e02cc25bcade220c757c0d4c7e0c5ef687525fa7058aea35ce1d0e,PodSandboxId:27bccef8ab2f4cf098b444d320edd34cb819daa5663a5a9f6fb45f718a30ed71,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1719334274299312900,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e062d122403f7f365a5b63f47c778e5,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6bf3ba,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4711285f965e8c05454daca7fcdcc495b4cdb478f1da0464bbf229ee779c5f2a,PodSandboxId:f0857f728320f8fae5c8573045909064552e84eb979cb14a6161cde4254448d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719334187028383448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jsf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddacba2-d039-40c7-8731-ba8e5707cfda,},Annotations:map[string]string{io.kubernetes.container.hash: df16ca51,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f6fba0c0a9f02a1736519655e7546883e15c7aad2270f2c098353e2a7a73987,PodSandboxId:c0db4dbb08fbc0e070a59277c742bfda588f8402f9632600d8a72e1ffecabb90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1719334186354943404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2flf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: dc85c133-117a-4389-9f53-32d82b3e40ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2ed0814a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e73c8ef9-04c9-4b27-930c-7290451edab0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:52:01 pause-756277 crio[2478]: time="2024-06-25 16:52:01.334274573Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=117d1a81-9cbd-43a2-9ed5-3a5c7f81b08a name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 25 16:52:01 pause-756277 crio[2478]: time="2024-06-25 16:52:01.334441175Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:51a664b19a7e20f1a04d601db6be55a6177b9becfe69785a6d151549a4dd066e,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-jsf7r,Uid:8ddacba2-d039-40c7-8731-ba8e5707cfda,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1719334274201793963,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-jsf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddacba2-d039-40c7-8731-ba8e5707cfda,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-25T16:49:45.796609101Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e6eca69bb88511a7765e98f9a0e1f9c8fb738497f34fa78aa51c9076d83fc375,Metadata:&PodSandboxMetadata{Name:kube-proxy-k2flf,Uid:dc85c133-117a-4389-9f53-32d82b3e40ce,Namespace:kube-system,Attempt
:1,},State:SANDBOX_READY,CreatedAt:1719334274074652590,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-k2flf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc85c133-117a-4389-9f53-32d82b3e40ce,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-25T16:49:45.672552569Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:88987c63b11430baa6c477d4f03c2fb16ce0f8cc82e05e1773300de3b3798151,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-756277,Uid:37c7640e520f03ea58f37be53c5e026b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1719334274016804319,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7640e520f03ea58f37be53c5e026b,tier: control-pla
ne,},Annotations:map[string]string{kubernetes.io/config.hash: 37c7640e520f03ea58f37be53c5e026b,kubernetes.io/config.seen: 2024-06-25T16:49:32.358994522Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:27bccef8ab2f4cf098b444d320edd34cb819daa5663a5a9f6fb45f718a30ed71,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-756277,Uid:8e062d122403f7f365a5b63f47c778e5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1719334273981446600,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e062d122403f7f365a5b63f47c778e5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.163:8443,kubernetes.io/config.hash: 8e062d122403f7f365a5b63f47c778e5,kubernetes.io/config.seen: 2024-06-25T16:49:32.358990692Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox
{Id:c1ab1bd9d99dd5bac46c1f9bc67bddb946d666f2c9f983e74e08cd42b4631c89,Metadata:&PodSandboxMetadata{Name:etcd-pause-756277,Uid:d6c158921e3bdda78ecc0ea9d20447d8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1719334273930665313,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c158921e3bdda78ecc0ea9d20447d8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.163:2379,kubernetes.io/config.hash: d6c158921e3bdda78ecc0ea9d20447d8,kubernetes.io/config.seen: 2024-06-25T16:49:32.358997097Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:75ba72120d9946873b4404d468c1f50b920c84806daf2661a3bb6e1066e8bd3f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-756277,Uid:7331ec8d20fdfec021ddab1d2b2e4438,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1719334273926320204,Lab
els:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7331ec8d20fdfec021ddab1d2b2e4438,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7331ec8d20fdfec021ddab1d2b2e4438,kubernetes.io/config.seen: 2024-06-25T16:49:32.358995958Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=117d1a81-9cbd-43a2-9ed5-3a5c7f81b08a name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 25 16:52:01 pause-756277 crio[2478]: time="2024-06-25 16:52:01.334945306Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59ceca2b-1356-4ef8-aea6-4416ae005dcb name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:52:01 pause-756277 crio[2478]: time="2024-06-25 16:52:01.335021208Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59ceca2b-1356-4ef8-aea6-4416ae005dcb name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:52:01 pause-756277 crio[2478]: time="2024-06-25 16:52:01.335485640Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1740598d5cf3e4ded267f89d2b1ce627811652faa5611ed1c54811aac4799b56,PodSandboxId:75ba72120d9946873b4404d468c1f50b920c84806daf2661a3bb6e1066e8bd3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719334297791062960,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7331ec8d20fdfec021ddab1d2b2e4438,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a25c3648d140888f573d7d76f44a4a9b301678446e57e7de9d648a15ef0e6477,PodSandboxId:88987c63b11430baa6c477d4f03c2fb16ce0f8cc82e05e1773300de3b3798151,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719334297789493345,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7640e520f03ea58f37be53c5e026b,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:815bfb4144a979c0d75febf219054e44f58be9cef61ae0e89f475aacac6d1797,PodSandboxId:27bccef8ab2f4cf098b444d320edd34cb819daa5663a5a9f6fb45f718a30ed71,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719334297769722348,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e062d122403f7f365a5b63f47c778e5,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6bf3ba,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2ee0a5a949a621f4b6dd6e3cddbb8da84c18c31f77678f5817f92c691f8c04a,PodSandboxId:c1ab1bd9d99dd5bac46c1f9bc67bddb946d666f2c9f983e74e08cd42b4631c89,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719334297762963682,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c158921e3bdda78ecc0ea9d20447d8,},Annotations:map[string]string{io.kubernetes.container.hash: 33772c37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35c64330d1f127ae90ecb45dc9fabcf2d04ee5c8ae6a8b906780deed57a4be43,PodSandboxId:51a664b19a7e20f1a04d601db6be55a6177b9becfe69785a6d151549a4dd066e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719334275418825022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jsf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddacba2-d039-40c7-8731-ba8e5707cfda,},Annotations:map[string]string{io.kubernetes.container.hash: df16ca51,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"}
,{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0822288d9ccfea7e829bb4c8c1ccbf4837614f5f2882d194b04550215bcf0d5,PodSandboxId:e6eca69bb88511a7765e98f9a0e1f9c8fb738497f34fa78aa51c9076d83fc375,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719334274576543392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2flf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc85c133-117a-4389-9f53-32d82b3e40ce,},Annotations:map[string]string{io.
kubernetes.container.hash: 2ed0814a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59ceca2b-1356-4ef8-aea6-4416ae005dcb name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:52:01 pause-756277 crio[2478]: time="2024-06-25 16:52:01.367785897Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5a6af71e-0a76-47e0-8543-ff530be8f71f name=/runtime.v1.RuntimeService/Version
	Jun 25 16:52:01 pause-756277 crio[2478]: time="2024-06-25 16:52:01.367876426Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5a6af71e-0a76-47e0-8543-ff530be8f71f name=/runtime.v1.RuntimeService/Version
	Jun 25 16:52:01 pause-756277 crio[2478]: time="2024-06-25 16:52:01.370652605Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=67dd1741-52ad-46f1-ab27-88942256cd24 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:52:01 pause-756277 crio[2478]: time="2024-06-25 16:52:01.371887034Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1719334321371803271,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=67dd1741-52ad-46f1-ab27-88942256cd24 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 25 16:52:01 pause-756277 crio[2478]: time="2024-06-25 16:52:01.373427141Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9c56b15b-46c4-4d6f-90b8-d48509b3af62 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:52:01 pause-756277 crio[2478]: time="2024-06-25 16:52:01.373578731Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9c56b15b-46c4-4d6f-90b8-d48509b3af62 name=/runtime.v1.RuntimeService/ListContainers
	Jun 25 16:52:01 pause-756277 crio[2478]: time="2024-06-25 16:52:01.374965434Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1740598d5cf3e4ded267f89d2b1ce627811652faa5611ed1c54811aac4799b56,PodSandboxId:75ba72120d9946873b4404d468c1f50b920c84806daf2661a3bb6e1066e8bd3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1719334297791062960,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7331ec8d20fdfec021ddab1d2b2e4438,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a25c3648d140888f573d7d76f44a4a9b301678446e57e7de9d648a15ef0e6477,PodSandboxId:88987c63b11430baa6c477d4f03c2fb16ce0f8cc82e05e1773300de3b3798151,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1719334297789493345,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7640e520f03ea58f37be53c5e026b,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:815bfb4144a979c0d75febf219054e44f58be9cef61ae0e89f475aacac6d1797,PodSandboxId:27bccef8ab2f4cf098b444d320edd34cb819daa5663a5a9f6fb45f718a30ed71,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1719334297769722348,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e062d122403f7f365a5b63f47c778e5,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6bf3ba,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2ee0a5a949a621f4b6dd6e3cddbb8da84c18c31f77678f5817f92c691f8c04a,PodSandboxId:c1ab1bd9d99dd5bac46c1f9bc67bddb946d666f2c9f983e74e08cd42b4631c89,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1719334297762963682,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c158921e3bdda78ecc0ea9d20447d8,},Annotations:map[string]string{io.kubernetes.container.hash: 33772c37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35c64330d1f127ae90ecb45dc9fabcf2d04ee5c8ae6a8b906780deed57a4be43,PodSandboxId:51a664b19a7e20f1a04d601db6be55a6177b9becfe69785a6d151549a4dd066e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1719334275418825022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jsf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddacba2-d039-40c7-8731-ba8e5707cfda,},Annotations:map[string]string{io.kubernetes.container.hash: df16ca51,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"}
,{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0822288d9ccfea7e829bb4c8c1ccbf4837614f5f2882d194b04550215bcf0d5,PodSandboxId:e6eca69bb88511a7765e98f9a0e1f9c8fb738497f34fa78aa51c9076d83fc375,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1719334274576543392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2flf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc85c133-117a-4389-9f53-32d82b3e40ce,},Annotations:map[string]string{io.
kubernetes.container.hash: 2ed0814a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d61afd2cc4c8bd0458bb500b4ed4a32ae4210ac11f14960978760413d53aae9,PodSandboxId:c1ab1bd9d99dd5bac46c1f9bc67bddb946d666f2c9f983e74e08cd42b4631c89,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1719334274476386447,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c158921e3bdda78ecc0ea9d20447d8,},Annotations:map[string]string{io.kubernetes.container.hash: 33772c37,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc4673099e5d0c171adb781d9b2890366b76aebd88506fdcd169c982796c793,PodSandboxId:88987c63b11430baa6c477d4f03c2fb16ce0f8cc82e05e1773300de3b3798151,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1719334274489565438,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7640e520f03ea58f37be53c5e026b,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edec890ad5331763619a1058109ef59719931eb1e66170f810b25b86a63bbd3c,PodSandboxId:75ba72120d9946873b4404d468c1f50b920c84806daf2661a3bb6e1066e8bd3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1719334274422406316,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7331ec8d20fdfec021ddab1d2b2e4438,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a43a1e8fd4e02cc25bcade220c757c0d4c7e0c5ef687525fa7058aea35ce1d0e,PodSandboxId:27bccef8ab2f4cf098b444d320edd34cb819daa5663a5a9f6fb45f718a30ed71,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1719334274299312900,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-756277,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e062d122403f7f365a5b63f47c778e5,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6bf3ba,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4711285f965e8c05454daca7fcdcc495b4cdb478f1da0464bbf229ee779c5f2a,PodSandboxId:f0857f728320f8fae5c8573045909064552e84eb979cb14a6161cde4254448d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1719334187028383448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jsf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddacba2-d039-40c7-8731-ba8e5707cfda,},Annotations:map[string]string{io.kubernetes.container.hash: df16ca51,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f6fba0c0a9f02a1736519655e7546883e15c7aad2270f2c098353e2a7a73987,PodSandboxId:c0db4dbb08fbc0e070a59277c742bfda588f8402f9632600d8a72e1ffecabb90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1719334186354943404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2flf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: dc85c133-117a-4389-9f53-32d82b3e40ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2ed0814a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9c56b15b-46c4-4d6f-90b8-d48509b3af62 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1740598d5cf3e       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   23 seconds ago      Running             kube-scheduler            2                   75ba72120d994       kube-scheduler-pause-756277
	a25c3648d1408       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   23 seconds ago      Running             kube-controller-manager   2                   88987c63b1143       kube-controller-manager-pause-756277
	815bfb4144a97       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   23 seconds ago      Running             kube-apiserver            2                   27bccef8ab2f4       kube-apiserver-pause-756277
	a2ee0a5a949a6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   23 seconds ago      Running             etcd                      2                   c1ab1bd9d99dd       etcd-pause-756277
	35c64330d1f12       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   46 seconds ago      Running             coredns                   1                   51a664b19a7e2       coredns-7db6d8ff4d-jsf7r
	f0822288d9ccf       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   46 seconds ago      Running             kube-proxy                1                   e6eca69bb8851       kube-proxy-k2flf
	afc4673099e5d       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   46 seconds ago      Exited              kube-controller-manager   1                   88987c63b1143       kube-controller-manager-pause-756277
	5d61afd2cc4c8       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   46 seconds ago      Exited              etcd                      1                   c1ab1bd9d99dd       etcd-pause-756277
	edec890ad5331       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   47 seconds ago      Exited              kube-scheduler            1                   75ba72120d994       kube-scheduler-pause-756277
	a43a1e8fd4e02       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   47 seconds ago      Exited              kube-apiserver            1                   27bccef8ab2f4       kube-apiserver-pause-756277
	4711285f965e8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   2 minutes ago       Exited              coredns                   0                   f0857f728320f       coredns-7db6d8ff4d-jsf7r
	7f6fba0c0a9f0       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   2 minutes ago       Exited              kube-proxy                0                   c0db4dbb08fbc       kube-proxy-k2flf
	
	
	==> coredns [35c64330d1f127ae90ecb45dc9fabcf2d04ee5c8ae6a8b906780deed57a4be43] <==
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:59343 - 47997 "HINFO IN 3079080349174427710.3312084374361514834. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021163949s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[876184842]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jun-2024 16:51:15.765) (total time: 10005ms):
	Trace[876184842]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10004ms (16:51:25.770)
	Trace[876184842]: [10.005096053s] [10.005096053s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[880905023]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jun-2024 16:51:15.769) (total time: 10001ms):
	Trace[880905023]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:51:25.771)
	Trace[880905023]: [10.001544802s] [10.001544802s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1493421118]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jun-2024 16:51:15.769) (total time: 10001ms):
	Trace[1493421118]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:51:25.771)
	Trace[1493421118]: [10.001726096s] [10.001726096s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:46510->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:46510->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:46512->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:46512->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:46494->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:46494->10.96.0.1:443: read: connection reset by peer
	
	
	==> coredns [4711285f965e8c05454daca7fcdcc495b4cdb478f1da0464bbf229ee779c5f2a] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1480068827]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jun-2024 16:49:47.488) (total time: 30000ms):
	Trace[1480068827]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (16:50:17.489)
	Trace[1480068827]: [30.000846366s] [30.000846366s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1828440197]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jun-2024 16:49:47.487) (total time: 30002ms):
	Trace[1828440197]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (16:50:17.488)
	Trace[1828440197]: [30.002450742s] [30.002450742s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1960613028]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jun-2024 16:49:47.489) (total time: 30001ms):
	Trace[1960613028]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (16:50:17.489)
	Trace[1960613028]: [30.00104752s] [30.00104752s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	[INFO] 127.0.0.1:37865 - 39199 "HINFO IN 1957842443361674784.2437041974684276081. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022665892s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-756277
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-756277
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fab7022a852c126b6362130df6080ba16ab6375b
	                    minikube.k8s.io/name=pause-756277
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_25T16_49_33_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 25 Jun 2024 16:49:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-756277
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 25 Jun 2024 16:52:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 25 Jun 2024 16:51:40 +0000   Tue, 25 Jun 2024 16:49:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 25 Jun 2024 16:51:40 +0000   Tue, 25 Jun 2024 16:49:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 25 Jun 2024 16:51:40 +0000   Tue, 25 Jun 2024 16:49:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 25 Jun 2024 16:51:40 +0000   Tue, 25 Jun 2024 16:49:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.163
	  Hostname:    pause-756277
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 eeb1476641764122aa8042096faae27a
	  System UUID:                eeb14766-4176-4122-aa80-42096faae27a
	  Boot ID:                    d9ea9705-1b7e-4cc2-ac62-f45f33132579
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-jsf7r                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m16s
	  kube-system                 etcd-pause-756277                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m29s
	  kube-system                 kube-apiserver-pause-756277             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-controller-manager-pause-756277    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-proxy-k2flf                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-scheduler-pause-756277             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m14s              kube-proxy       
	  Normal  Starting                 20s                kube-proxy       
	  Normal  NodeHasSufficientPID     2m29s              kubelet          Node pause-756277 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m29s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m29s              kubelet          Node pause-756277 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m29s              kubelet          Node pause-756277 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m29s              kubelet          Starting kubelet.
	  Normal  NodeReady                2m28s              kubelet          Node pause-756277 status is now: NodeReady
	  Normal  RegisteredNode           2m16s              node-controller  Node pause-756277 event: Registered Node pause-756277 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-756277 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-756277 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-756277 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                 node-controller  Node pause-756277 event: Registered Node pause-756277 in Controller
	
	
	==> dmesg <==
	[  +0.084322] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.199298] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.146435] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.316284] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +4.802188] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +0.074949] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.619908] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.611416] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.489347] systemd-fstab-generator[1290]: Ignoring "noauto" option for root device
	[  +0.115730] kauditd_printk_skb: 41 callbacks suppressed
	[ +13.941007] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.015490] systemd-fstab-generator[1538]: Ignoring "noauto" option for root device
	[ +11.446316] kauditd_printk_skb: 88 callbacks suppressed
	[Jun25 16:51] systemd-fstab-generator[2399]: Ignoring "noauto" option for root device
	[  +0.188722] systemd-fstab-generator[2411]: Ignoring "noauto" option for root device
	[  +0.198333] systemd-fstab-generator[2425]: Ignoring "noauto" option for root device
	[  +0.149886] systemd-fstab-generator[2437]: Ignoring "noauto" option for root device
	[  +0.307005] systemd-fstab-generator[2465]: Ignoring "noauto" option for root device
	[  +7.671887] systemd-fstab-generator[2592]: Ignoring "noauto" option for root device
	[  +0.125347] kauditd_printk_skb: 100 callbacks suppressed
	[ +12.710790] kauditd_printk_skb: 87 callbacks suppressed
	[ +10.846288] systemd-fstab-generator[3370]: Ignoring "noauto" option for root device
	[  +0.812273] kauditd_printk_skb: 17 callbacks suppressed
	[ +15.974902] kauditd_printk_skb: 12 callbacks suppressed
	[  +2.009832] systemd-fstab-generator[3693]: Ignoring "noauto" option for root device
	
	
	==> etcd [5d61afd2cc4c8bd0458bb500b4ed4a32ae4210ac11f14960978760413d53aae9] <==
	{"level":"warn","ts":"2024-06-25T16:51:15.312061Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-06-25T16:51:15.31439Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.50.163:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.50.163:2380","--initial-cluster=pause-756277=https://192.168.50.163:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.50.163:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.50.163:2380","--name=pause-756277","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trust
ed-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-06-25T16:51:15.31454Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-06-25T16:51:15.3146Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-06-25T16:51:15.314639Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.50.163:2380"]}
	{"level":"info","ts":"2024-06-25T16:51:15.314705Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-25T16:51:15.316304Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.163:2379"]}
	{"level":"info","ts":"2024-06-25T16:51:15.317307Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-756277","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.50.163:2380"],"listen-peer-urls":["https://192.168.50.163:2380"],"advertise-client-urls":["https://192.168.50.163:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.163:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cl
uster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-06-25T16:51:15.341682Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"23.127279ms"}
	{"level":"info","ts":"2024-06-25T16:51:15.382847Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-06-25T16:51:15.406736Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"c04ffccd875dba59","local-member-id":"7851e28efa6aae4","commit-index":444}
	{"level":"info","ts":"2024-06-25T16:51:15.408306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7851e28efa6aae4 switched to configuration voters=()"}
	{"level":"info","ts":"2024-06-25T16:51:15.408362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7851e28efa6aae4 became follower at term 2"}
	{"level":"info","ts":"2024-06-25T16:51:15.408387Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 7851e28efa6aae4 [peers: [], term: 2, commit: 444, applied: 0, lastindex: 444, lastterm: 2]"}
	{"level":"warn","ts":"2024-06-25T16:51:15.414303Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-06-25T16:51:15.483124Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":417}
	{"level":"info","ts":"2024-06-25T16:51:15.491233Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	
	
	==> etcd [a2ee0a5a949a621f4b6dd6e3cddbb8da84c18c31f77678f5817f92c691f8c04a] <==
	{"level":"info","ts":"2024-06-25T16:51:39.130264Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-25T16:51:39.130479Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-06-25T16:51:52.345052Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.712961ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12314125955741386215 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.163\" mod_revision:428 > success:<request_put:<key:\"/registry/masterleases/192.168.50.163\" value_size:67 lease:3090753918886610405 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.163\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-25T16:51:52.345428Z","caller":"traceutil/trace.go:171","msg":"trace[1228458270] linearizableReadLoop","detail":"{readStateIndex:524; appliedIndex:523; }","duration":"130.507545ms","start":"2024-06-25T16:51:52.214895Z","end":"2024-06-25T16:51:52.345403Z","steps":["trace[1228458270] 'read index received'  (duration: 29.026µs)","trace[1228458270] 'applied index is now lower than readState.Index'  (duration: 130.468755ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-25T16:51:52.345536Z","caller":"traceutil/trace.go:171","msg":"trace[690182632] transaction","detail":"{read_only:false; response_revision:477; number_of_response:1; }","duration":"257.418808ms","start":"2024-06-25T16:51:52.088089Z","end":"2024-06-25T16:51:52.345508Z","steps":["trace[690182632] 'process raft request'  (duration: 125.668909ms)","trace[690182632] 'compare'  (duration: 130.587509ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-25T16:51:52.345876Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.966469ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-pause-756277\" ","response":"range_response_count:1 size:4566"}
	{"level":"info","ts":"2024-06-25T16:51:52.345943Z","caller":"traceutil/trace.go:171","msg":"trace[743665062] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-pause-756277; range_end:; response_count:1; response_revision:477; }","duration":"131.0677ms","start":"2024-06-25T16:51:52.214865Z","end":"2024-06-25T16:51:52.345933Z","steps":["trace[743665062] 'agreement among raft nodes before linearized reading'  (duration: 130.65142ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-25T16:51:53.790078Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.366263ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12314125955741386272 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-sw7bk\" mod_revision:404 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-sw7bk\" value_size:1239 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-sw7bk\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-25T16:51:53.790315Z","caller":"traceutil/trace.go:171","msg":"trace[352695321] linearizableReadLoop","detail":"{readStateIndex:526; appliedIndex:525; }","duration":"224.304871ms","start":"2024-06-25T16:51:53.565969Z","end":"2024-06-25T16:51:53.790273Z","steps":["trace[352695321] 'read index received'  (duration: 29.480602ms)","trace[352695321] 'applied index is now lower than readState.Index'  (duration: 194.82311ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-25T16:51:53.790352Z","caller":"traceutil/trace.go:171","msg":"trace[1732289185] transaction","detail":"{read_only:false; response_revision:479; number_of_response:1; }","duration":"255.868301ms","start":"2024-06-25T16:51:53.534469Z","end":"2024-06-25T16:51:53.790338Z","steps":["trace[1732289185] 'process raft request'  (duration: 127.102947ms)","trace[1732289185] 'compare'  (duration: 128.22105ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-25T16:51:53.791787Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.747062ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-06-25T16:51:53.790426Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.475926ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner\" ","response":"range_response_count:1 size:238"}
	{"level":"info","ts":"2024-06-25T16:51:53.792908Z","caller":"traceutil/trace.go:171","msg":"trace[826437060] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner; range_end:; response_count:1; response_revision:479; }","duration":"226.976302ms","start":"2024-06-25T16:51:53.565914Z","end":"2024-06-25T16:51:53.79289Z","steps":["trace[826437060] 'agreement among raft nodes before linearized reading'  (duration: 224.462008ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-25T16:51:53.793354Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.490743ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" ","response":"range_response_count:1 size:203"}
	{"level":"info","ts":"2024-06-25T16:51:53.793474Z","caller":"traceutil/trace.go:171","msg":"trace[918250213] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpoint-controller; range_end:; response_count:1; response_revision:479; }","duration":"176.637538ms","start":"2024-06-25T16:51:53.616826Z","end":"2024-06-25T16:51:53.793463Z","steps":["trace[918250213] 'agreement among raft nodes before linearized reading'  (duration: 176.480647ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-25T16:51:53.79382Z","caller":"traceutil/trace.go:171","msg":"trace[1221698979] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:479; }","duration":"187.79978ms","start":"2024-06-25T16:51:53.606004Z","end":"2024-06-25T16:51:53.793804Z","steps":["trace[1221698979] 'agreement among raft nodes before linearized reading'  (duration: 185.746777ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-25T16:51:54.210104Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.045753ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12314125955741386280 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:429 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4069 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-25T16:51:54.21043Z","caller":"traceutil/trace.go:171","msg":"trace[859567794] transaction","detail":"{read_only:false; response_revision:481; number_of_response:1; }","duration":"387.087462ms","start":"2024-06-25T16:51:53.823329Z","end":"2024-06-25T16:51:54.210416Z","steps":["trace[859567794] 'process raft request'  (duration: 387.020408ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-25T16:51:54.212506Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-25T16:51:53.823317Z","time spent":"389.088956ms","remote":"127.0.0.1:49632","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":785,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:405 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:728 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >"}
	{"level":"info","ts":"2024-06-25T16:51:54.210582Z","caller":"traceutil/trace.go:171","msg":"trace[1854482293] linearizableReadLoop","detail":"{readStateIndex:527; appliedIndex:526; }","duration":"390.224942ms","start":"2024-06-25T16:51:53.820344Z","end":"2024-06-25T16:51:54.210569Z","steps":["trace[1854482293] 'read index received'  (duration: 235.648967ms)","trace[1854482293] 'applied index is now lower than readState.Index'  (duration: 154.574857ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-25T16:51:54.211014Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"390.65131ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking\" ","response":"range_response_count:1 size:370"}
	{"level":"info","ts":"2024-06-25T16:51:54.21272Z","caller":"traceutil/trace.go:171","msg":"trace[112953161] range","detail":"{range_begin:/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking; range_end:; response_count:1; response_revision:481; }","duration":"392.3807ms","start":"2024-06-25T16:51:53.820325Z","end":"2024-06-25T16:51:54.212706Z","steps":["trace[112953161] 'agreement among raft nodes before linearized reading'  (duration: 390.280541ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-25T16:51:54.212752Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-25T16:51:53.820317Z","time spent":"392.424293ms","remote":"127.0.0.1:49572","response type":"/etcdserverpb.KV/Range","request count":0,"request size":87,"response count":1,"response size":393,"request content":"key:\"/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking\" "}
	{"level":"info","ts":"2024-06-25T16:51:54.211053Z","caller":"traceutil/trace.go:171","msg":"trace[1500689346] transaction","detail":"{read_only:false; response_revision:480; number_of_response:1; }","duration":"393.187669ms","start":"2024-06-25T16:51:53.817853Z","end":"2024-06-25T16:51:54.211041Z","steps":["trace[1500689346] 'process raft request'  (duration: 238.12993ms)","trace[1500689346] 'compare'  (duration: 153.955522ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-25T16:51:54.212929Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-25T16:51:53.81784Z","time spent":"395.053146ms","remote":"127.0.0.1:49930","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4118,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:429 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4069 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	
	
	==> kernel <==
	 16:52:01 up 3 min,  0 users,  load average: 0.82, 0.34, 0.13
	Linux pause-756277 5.10.207 #1 SMP Mon Jun 24 21:03:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [815bfb4144a979c0d75febf219054e44f58be9cef61ae0e89f475aacac6d1797] <==
	I0625 16:51:40.628215       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0625 16:51:40.700844       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0625 16:51:40.700892       1 policy_source.go:224] refreshing policies
	I0625 16:51:40.724918       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0625 16:51:40.728331       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0625 16:51:40.735695       1 aggregator.go:165] initial CRD sync complete...
	I0625 16:51:40.735741       1 autoregister_controller.go:141] Starting autoregister controller
	I0625 16:51:40.735748       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0625 16:51:40.735762       1 cache.go:39] Caches are synced for autoregister controller
	I0625 16:51:40.768437       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0625 16:51:40.768496       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0625 16:51:40.768503       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0625 16:51:40.776434       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0625 16:51:40.791055       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0625 16:51:40.791252       1 shared_informer.go:320] Caches are synced for configmaps
	I0625 16:51:40.795680       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0625 16:51:40.819914       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0625 16:51:41.568634       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0625 16:51:42.002764       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0625 16:51:42.023425       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0625 16:51:42.061569       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0625 16:51:42.099754       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0625 16:51:42.112974       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0625 16:51:53.533235       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0625 16:51:53.822708       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [a43a1e8fd4e02cc25bcade220c757c0d4c7e0c5ef687525fa7058aea35ce1d0e] <==
	I0625 16:51:14.944075       1 server.go:148] Version: v1.30.2
	I0625 16:51:14.944322       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0625 16:51:16.051826       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:16.051926       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0625 16:51:16.053340       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0625 16:51:16.081356       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0625 16:51:16.081397       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0625 16:51:16.081603       1 instance.go:299] Using reconciler: lease
	W0625 16:51:16.082828       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0625 16:51:16.082927       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0625 16:51:17.053043       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:17.053208       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:17.084386       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:18.473622       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:18.723460       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:18.799674       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:20.847553       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:21.242534       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:21.782952       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:24.838671       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:25.691591       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:26.562842       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:31.451796       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:33.056927       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0625 16:51:33.740326       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [a25c3648d140888f573d7d76f44a4a9b301678446e57e7de9d648a15ef0e6477] <==
	I0625 16:51:53.522383       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0625 16:51:53.525234       1 shared_informer.go:320] Caches are synced for taint
	I0625 16:51:53.525469       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0625 16:51:53.525662       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-756277"
	I0625 16:51:53.525789       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0625 16:51:53.528333       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0625 16:51:53.530233       1 shared_informer.go:320] Caches are synced for endpoint
	I0625 16:51:53.540308       1 shared_informer.go:320] Caches are synced for service account
	I0625 16:51:53.548622       1 shared_informer.go:320] Caches are synced for daemon sets
	I0625 16:51:53.556488       1 shared_informer.go:320] Caches are synced for crt configmap
	I0625 16:51:53.560836       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0625 16:51:53.560875       1 shared_informer.go:320] Caches are synced for attach detach
	I0625 16:51:53.564591       1 shared_informer.go:320] Caches are synced for expand
	I0625 16:51:53.567037       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0625 16:51:53.573465       1 shared_informer.go:320] Caches are synced for HPA
	I0625 16:51:53.639804       1 shared_informer.go:320] Caches are synced for disruption
	I0625 16:51:53.686222       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0625 16:51:53.686533       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="91.018µs"
	I0625 16:51:53.723325       1 shared_informer.go:320] Caches are synced for deployment
	I0625 16:51:53.725324       1 shared_informer.go:320] Caches are synced for resource quota
	I0625 16:51:53.736125       1 shared_informer.go:320] Caches are synced for cronjob
	I0625 16:51:53.736458       1 shared_informer.go:320] Caches are synced for resource quota
	I0625 16:51:54.192782       1 shared_informer.go:320] Caches are synced for garbage collector
	I0625 16:51:54.220201       1 shared_informer.go:320] Caches are synced for garbage collector
	I0625 16:51:54.220306       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [afc4673099e5d0c171adb781d9b2890366b76aebd88506fdcd169c982796c793] <==
	
	
	==> kube-proxy [7f6fba0c0a9f02a1736519655e7546883e15c7aad2270f2c098353e2a7a73987] <==
	I0625 16:49:46.872576       1 server_linux.go:69] "Using iptables proxy"
	I0625 16:49:46.979135       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.163"]
	I0625 16:49:47.139932       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0625 16:49:47.139974       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0625 16:49:47.139990       1 server_linux.go:165] "Using iptables Proxier"
	I0625 16:49:47.150208       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0625 16:49:47.153246       1 server.go:872] "Version info" version="v1.30.2"
	I0625 16:49:47.153265       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0625 16:49:47.162417       1 config.go:192] "Starting service config controller"
	I0625 16:49:47.171935       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0625 16:49:47.177328       1 config.go:101] "Starting endpoint slice config controller"
	I0625 16:49:47.177371       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0625 16:49:47.199098       1 config.go:319] "Starting node config controller"
	I0625 16:49:47.199371       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0625 16:49:47.277662       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0625 16:49:47.278369       1 shared_informer.go:320] Caches are synced for service config
	I0625 16:49:47.300999       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f0822288d9ccfea7e829bb4c8c1ccbf4837614f5f2882d194b04550215bcf0d5] <==
	I0625 16:51:16.000656       1 server_linux.go:69] "Using iptables proxy"
	E0625 16:51:26.005695       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-756277\": net/http: TLS handshake timeout"
	E0625 16:51:36.752829       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-756277\": dial tcp 192.168.50.163:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.50.163:43008->192.168.50.163:8443: read: connection reset by peer"
	I0625 16:51:40.745779       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.163"]
	I0625 16:51:40.855039       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0625 16:51:40.855219       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0625 16:51:40.855268       1 server_linux.go:165] "Using iptables Proxier"
	I0625 16:51:40.861284       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0625 16:51:40.861501       1 server.go:872] "Version info" version="v1.30.2"
	I0625 16:51:40.861511       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0625 16:51:40.863643       1 config.go:192] "Starting service config controller"
	I0625 16:51:40.863676       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0625 16:51:40.863699       1 config.go:101] "Starting endpoint slice config controller"
	I0625 16:51:40.863703       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0625 16:51:40.864241       1 config.go:319] "Starting node config controller"
	I0625 16:51:40.864267       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0625 16:51:40.964502       1 shared_informer.go:320] Caches are synced for node config
	I0625 16:51:40.964553       1 shared_informer.go:320] Caches are synced for service config
	I0625 16:51:40.964574       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1740598d5cf3e4ded267f89d2b1ce627811652faa5611ed1c54811aac4799b56] <==
	I0625 16:51:38.975649       1 serving.go:380] Generated self-signed cert in-memory
	W0625 16:51:40.679766       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0625 16:51:40.679945       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0625 16:51:40.679965       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0625 16:51:40.680074       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0625 16:51:40.746921       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0625 16:51:40.747084       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0625 16:51:40.752730       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0625 16:51:40.753365       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0625 16:51:40.753382       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0625 16:51:40.763542       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0625 16:51:40.864537       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [edec890ad5331763619a1058109ef59719931eb1e66170f810b25b86a63bbd3c] <==
	I0625 16:51:16.281198       1 serving.go:380] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Jun 25 16:51:37 pause-756277 kubelet[3377]: I0625 16:51:37.528806    3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/37c7640e520f03ea58f37be53c5e026b-k8s-certs\") pod \"kube-controller-manager-pause-756277\" (UID: \"37c7640e520f03ea58f37be53c5e026b\") " pod="kube-system/kube-controller-manager-pause-756277"
	Jun 25 16:51:37 pause-756277 kubelet[3377]: I0625 16:51:37.528844    3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/37c7640e520f03ea58f37be53c5e026b-kubeconfig\") pod \"kube-controller-manager-pause-756277\" (UID: \"37c7640e520f03ea58f37be53c5e026b\") " pod="kube-system/kube-controller-manager-pause-756277"
	Jun 25 16:51:37 pause-756277 kubelet[3377]: I0625 16:51:37.528873    3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7331ec8d20fdfec021ddab1d2b2e4438-kubeconfig\") pod \"kube-scheduler-pause-756277\" (UID: \"7331ec8d20fdfec021ddab1d2b2e4438\") " pod="kube-system/kube-scheduler-pause-756277"
	Jun 25 16:51:37 pause-756277 kubelet[3377]: I0625 16:51:37.589909    3377 kubelet_node_status.go:73] "Attempting to register node" node="pause-756277"
	Jun 25 16:51:37 pause-756277 kubelet[3377]: E0625 16:51:37.590917    3377 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.163:8443: connect: connection refused" node="pause-756277"
	Jun 25 16:51:37 pause-756277 kubelet[3377]: I0625 16:51:37.737119    3377 scope.go:117] "RemoveContainer" containerID="5d61afd2cc4c8bd0458bb500b4ed4a32ae4210ac11f14960978760413d53aae9"
	Jun 25 16:51:37 pause-756277 kubelet[3377]: I0625 16:51:37.746315    3377 scope.go:117] "RemoveContainer" containerID="a43a1e8fd4e02cc25bcade220c757c0d4c7e0c5ef687525fa7058aea35ce1d0e"
	Jun 25 16:51:37 pause-756277 kubelet[3377]: I0625 16:51:37.747260    3377 scope.go:117] "RemoveContainer" containerID="afc4673099e5d0c171adb781d9b2890366b76aebd88506fdcd169c982796c793"
	Jun 25 16:51:37 pause-756277 kubelet[3377]: I0625 16:51:37.747371    3377 scope.go:117] "RemoveContainer" containerID="edec890ad5331763619a1058109ef59719931eb1e66170f810b25b86a63bbd3c"
	Jun 25 16:51:37 pause-756277 kubelet[3377]: E0625 16:51:37.893014    3377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-756277?timeout=10s\": dial tcp 192.168.50.163:8443: connect: connection refused" interval="800ms"
	Jun 25 16:51:37 pause-756277 kubelet[3377]: I0625 16:51:37.995984    3377 kubelet_node_status.go:73] "Attempting to register node" node="pause-756277"
	Jun 25 16:51:37 pause-756277 kubelet[3377]: E0625 16:51:37.997018    3377 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.163:8443: connect: connection refused" node="pause-756277"
	Jun 25 16:51:38 pause-756277 kubelet[3377]: I0625 16:51:38.799133    3377 kubelet_node_status.go:73] "Attempting to register node" node="pause-756277"
	Jun 25 16:51:40 pause-756277 kubelet[3377]: I0625 16:51:40.776803    3377 kubelet_node_status.go:112] "Node was previously registered" node="pause-756277"
	Jun 25 16:51:40 pause-756277 kubelet[3377]: I0625 16:51:40.776892    3377 kubelet_node_status.go:76] "Successfully registered node" node="pause-756277"
	Jun 25 16:51:40 pause-756277 kubelet[3377]: I0625 16:51:40.778454    3377 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 25 16:51:40 pause-756277 kubelet[3377]: I0625 16:51:40.779764    3377 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 25 16:51:40 pause-756277 kubelet[3377]: E0625 16:51:40.860985    3377 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"etcd-pause-756277\" already exists" pod="kube-system/etcd-pause-756277"
	Jun 25 16:51:41 pause-756277 kubelet[3377]: I0625 16:51:41.249950    3377 apiserver.go:52] "Watching apiserver"
	Jun 25 16:51:41 pause-756277 kubelet[3377]: I0625 16:51:41.255298    3377 topology_manager.go:215] "Topology Admit Handler" podUID="dc85c133-117a-4389-9f53-32d82b3e40ce" podNamespace="kube-system" podName="kube-proxy-k2flf"
	Jun 25 16:51:41 pause-756277 kubelet[3377]: I0625 16:51:41.255491    3377 topology_manager.go:215] "Topology Admit Handler" podUID="8ddacba2-d039-40c7-8731-ba8e5707cfda" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jsf7r"
	Jun 25 16:51:41 pause-756277 kubelet[3377]: E0625 16:51:41.266038    3377 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-pause-756277\" already exists" pod="kube-system/kube-controller-manager-pause-756277"
	Jun 25 16:51:41 pause-756277 kubelet[3377]: I0625 16:51:41.287366    3377 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 25 16:51:41 pause-756277 kubelet[3377]: I0625 16:51:41.317455    3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc85c133-117a-4389-9f53-32d82b3e40ce-lib-modules\") pod \"kube-proxy-k2flf\" (UID: \"dc85c133-117a-4389-9f53-32d82b3e40ce\") " pod="kube-system/kube-proxy-k2flf"
	Jun 25 16:51:41 pause-756277 kubelet[3377]: I0625 16:51:41.317635    3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc85c133-117a-4389-9f53-32d82b3e40ce-xtables-lock\") pod \"kube-proxy-k2flf\" (UID: \"dc85c133-117a-4389-9f53-32d82b3e40ce\") " pod="kube-system/kube-proxy-k2flf"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-756277 -n pause-756277
helpers_test.go:261: (dbg) Run:  kubectl --context pause-756277 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (95.24s)
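A note on the post-mortem commands above: the helper's kubectl call with --field-selector=status.phase!=Running simply lists every pod, across all namespaces, whose phase is anything other than Running. Below is a minimal client-go sketch of the same check; the context name (pause-756277) is taken from this run, and the program is an illustration rather than the test harness's actual helper.

// list_not_running.go (hypothetical): list all pods whose phase is not Running,
// mirroring the post-mortem kubectl field selector shown above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a REST config from the default kubeconfig, selecting the context
	// used in this run (adjust for your own cluster).
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{CurrentContext: "pause-756277"},
	).ClientConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same filter as the helper: every pod, in every namespace ("" means all),
	// whose status.phase is not Running.
	pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s is %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}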

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7200.051s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (20m7s)
	TestStartStop (20m48s)
	TestStartStop/group/default-k8s-diff-port (15m1s)
	TestStartStop/group/default-k8s-diff-port/serial (15m1s)
	TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (3m19s)
	TestStartStop/group/embed-certs (17m18s)
	TestStartStop/group/embed-certs/serial (17m18s)
	TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (3m43s)
	TestStartStop/group/no-preload (17m20s)
	TestStartStop/group/no-preload/serial (17m20s)
	TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (2m29s)
	TestStartStop/group/old-k8s-version (17m40s)
	TestStartStop/group/old-k8s-version/serial (17m40s)
	TestStartStop/group/old-k8s-version/serial/SecondStart (11m12s)

                                                
                                                
goroutine 3187 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d
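For orientation in the dump that follows: goroutine 3187 above is the testing package's timeout alarm, created via time.goFunc when the test binary starts. Once the -timeout deadline (2h0m0s for this run) passes while tests are still running, that goroutine panics the whole process and every goroutine is printed, which is what the remainder of this section is. A minimal reproduction sketch, with a hypothetical file and package name, run with go test -timeout 1s:

// timeout_sketch_test.go (hypothetical): running `go test -timeout 1s` on this
// produces "panic: test timed out after 1s" followed by a full goroutine dump,
// raised by testing.(*M).startAlarm exactly as in the trace above.
package sketch

import (
	"testing"
	"time"
)

func TestStillRunningWhenAlarmFires(t *testing.T) {
	// Sleeps well past the 1s deadline, so the alarm goroutine fires first.
	time.Sleep(10 * time.Second)
}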

                                                
                                                
goroutine 1 [chan receive, 15 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0002a4b60, 0xc0007f9bb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0007862a0, {0x49ea1c0, 0x2b, 0x2b}, {0x26aa1f5?, 0xc00086e900?, 0x4aa69c0?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc000764be0)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc000764be0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:133 +0x195

                                                
                                                
goroutine 8 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000796b80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 2348 [chan receive, 17 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001948600, 0xc00010e240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2343
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2579 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36d97d0, 0xc000477730}, {0x36ccee0, 0xc00147a0a0}, 0x1, 0x0, 0xc001ae5c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36d97d0?, 0xc000480070?}, 0x3b9aca00, 0xc001cf5e10?, 0x1, 0xc001cf5c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36d97d0, 0xc000480070}, 0xc001b8c9c0, {0xc001d0c000, 0x1c}, {0x2675963, 0x14}, {0x268d50b, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x36d97d0, 0xc000480070}, 0xc001b8c9c0, {0xc001d0c000, 0x1c}, {0x2678849?, 0xc0013bf760?}, {0x551113?, 0x4a16ef?}, {0xc001400000, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001b8c9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001b8c9c0, 0xc0001ac580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2406
	/usr/local/go/src/testing/testing.go:1742 +0x390
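Goroutine 2579 above is the 9m0s wait from UserAppExistsAfterStop: PodWait sits in k8s.io/apimachinery's wait.PollUntilContextTimeout, re-evaluating its condition every second until a matching pod shows up or the timeout expires. A minimal sketch of that polling pattern, using a stand-in condition instead of the real dashboard-pod lookup:

// poll_sketch.go (hypothetical): the polling primitive the trace shows,
// checking every second for up to 9 minutes, with the condition evaluated
// immediately before the first sleep (immediate=true).
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	start := time.Now()
	err := wait.PollUntilContextTimeout(context.Background(), time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			// The real helper would list pods matching k8s-app=kubernetes-dashboard
			// here and return true once one is Running; this stand-in just
			// succeeds after a few seconds.
			return time.Since(start) > 3*time.Second, nil
		})
	fmt.Println("wait finished, err =", err)
}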

                                                
                                                
goroutine 1681 [chan receive, 22 minutes]:
testing.(*T).Run(0xc0007da000, {0x264f9f6?, 0x55125c?}, 0xc001692210)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0007da000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0007da000, 0x315bb70)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 15 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 14
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

                                                
                                                
goroutine 2347 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0014c9a40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2343
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2319 [chan receive, 3 minutes]:
testing.(*T).Run(0xc0007dbd40, {0x267b6f6?, 0x60400000004?}, 0xc001474980)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0007dbd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0007dbd40, 0xc0001ac600)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1877
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 320 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36d9990, 0xc00010e240}, 0xc001905f50, 0xc001541f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36d9990, 0xc00010e240}, 0x0?, 0xc001905f50, 0xc001905f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36d9990?, 0xc00010e240?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 406
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 406 [chan receive, 76 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000031a00, 0xc00010e240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 337
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2361 [chan receive, 5 minutes]:
testing.(*T).Run(0xc001b8c820, {0x267b6f6?, 0x60400000004?}, 0xc00050c700)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001b8c820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001b8c820, 0xc000593200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1879
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2428 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00017fdc0, 0xc00010e240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2437
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2296 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2295
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2468 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2467
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 319 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0000319d0, 0x22)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x213f080?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00160d380)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000031a00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000770d30, {0x36b5a80, 0xc001524a50}, 0x1, 0xc00010e240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000770d30, 0x3b9aca00, 0x0, 0x1, 0xc00010e240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 406
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 188 [IO wait, 78 minutes]:
internal/poll.runtime_pollWait(0x7f00564877d8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xf?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000645b00)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc000645b00)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc00059a1c0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc00059a1c0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0000360f0, {0x36cc820, 0xc00059a1c0})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0000360f0)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0x592e24?, 0xc0002a5040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 185
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

                                                
                                                
goroutine 2565 [syscall, 13 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x118c6, 0xc000875ab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc001de6600)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc001de6600)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0019a8580)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0019a8580)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc00002eea0, 0xc0019a8580)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x36d97d0, 0xc0004a8000}, 0xc00002eea0, {0xc001928018, 0x16}, {0x0?, 0xc000bec760?}, {0x551113?, 0x4a16ef?}, {0xc000223080, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00002eea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00002eea0, 0xc000592380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2270
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 321 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 320
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1693 [chan receive, 22 minutes]:
testing.(*T).Run(0xc0007da9c0, {0x264f9f6?, 0x551113?}, 0x315bd90)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0007da9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0007da9c0, 0x315bbb8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2665 [IO wait]:
internal/poll.runtime_pollWait(0x7f00552d8d10, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001475680?, 0xc0000cb000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001475680, {0xc0000cb000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc001475680, {0xc0000cb000?, 0xc000614b40?, 0x2?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc001526048, {0xc0000cb000?, 0xc0000cb005?, 0x6f?})
	/usr/local/go/src/net/net.go:179 +0x45
crypto/tls.(*atLeastReader).Read(0xc00189a528, {0xc0000cb000?, 0x0?, 0xc00189a528?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc001a57eb0, {0x36b6220, 0xc00189a528})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc001a57c08, {0x7f005530f0d8, 0xc0017f61f8}, 0xc001425980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc001a57c08, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc001a57c08, {0xc0013ae000, 0x1000, 0xc001b28540?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc001c87020, {0xc001930200, 0x9, 0x49a5c00?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x36b4700, 0xc001c87020}, {0xc001930200, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc001930200, 0x9, 0x1425dc0?}, {0x36b4700?, 0xc001c87020?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.26.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0019301c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.26.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc001425fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.26.0/http2/transport.go:2358 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc0007ea780)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.26.0/http2/transport.go:2254 +0x8b
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2664
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.26.0/http2/transport.go:869 +0xd1b

goroutine 2406 [chan receive, 5 minutes]:
testing.(*T).Run(0xc0014f2000, {0x267b6f6?, 0x60400000004?}, 0xc0001ac580)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0014f2000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0014f2000, 0xc00050c300)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1876
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1953 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc00088f720)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0019c8680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0019c8680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0019c8680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0019c8680, 0xc000592c80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1822
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 596 [chan send, 74 minutes]:
os/exec.(*Cmd).watchCtx(0xc001eac840, 0xc001bf8d20)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 328
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 1958 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc00088f720)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0007db520)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0007db520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0007db520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0007db520, 0xc00050c980)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1822
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1793 [chan receive, 22 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc001b8c000, 0x315bd90)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1693
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 405 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00160d4a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 337
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:113 +0x205

goroutine 1879 [chan receive, 17 minutes]:
testing.(*T).Run(0xc001b8d380, {0x2650fa1?, 0x0?}, 0xc000593200)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001b8d380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001b8d380, 0xc000b9e280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1793
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 449 [chan send, 76 minutes]:
os/exec.(*Cmd).watchCtx(0xc001c02f20, 0xc001c06300)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 448
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 1823 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc00088f720)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0007dab60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0007dab60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0007dab60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0007dab60, 0xc00050c500)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1822
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1875 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc00088f720)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001b8cd00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001b8cd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001b8cd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001b8cd00, 0xc000b9e140)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1793
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 701 [select, 74 minutes]:
net/http.(*persistConn).readLoop(0xc001a0be60)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 699
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

goroutine 1952 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc00088f720)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0019c81a0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0019c81a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0019c81a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0019c81a0, 0xc000592980)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1822
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1957 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc00088f720)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0007db380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0007db380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0007db380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0007db380, 0xc00050c900)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1822
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 702 [select, 74 minutes]:
net/http.(*persistConn).writeLoop(0xc001a0be60)
	/usr/local/go/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 699
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

goroutine 1822 [chan receive, 20 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0007da340, 0xc001692210)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1681
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1876 [chan receive, 15 minutes]:
testing.(*T).Run(0xc001b8cea0, {0x2650fa1?, 0x0?}, 0xc00050c300)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001b8cea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001b8cea0, 0xc000b9e180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1793
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2525 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36d97d0, 0xc0004a8770}, {0x36ccee0, 0xc00041bb60}, 0x1, 0x0, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36d97d0?, 0xc0000ce000?}, 0x3b9aca00, 0xc00006fe10?, 0x1, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36d97d0, 0xc0000ce000}, 0xc00002eb60, {0xc0014047b0, 0x12}, {0x2675963, 0x14}, {0x268d50b, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x36d97d0, 0xc0000ce000}, 0xc00002eb60, {0xc0014047b0, 0x12}, {0x265cd73?, 0xc001430760?}, {0x551113?, 0x4a16ef?}, {0xc0001b8a00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00002eb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00002eb60, 0xc00050c700)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2361
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2294 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0019485d0, 0x3)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x213f080?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0014c9920)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001948600)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000610d50, {0x36b5a80, 0xc00194a270}, 0x1, 0xc00010e240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000610d50, 0x3b9aca00, 0x0, 0x1, 0xc00010e240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2348
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:140 +0x1ef

goroutine 534 [chan send, 76 minutes]:
os/exec.(*Cmd).watchCtx(0xc001bf7ce0, 0xc001bf8600)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 533
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 2467 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36d9990, 0xc00010e240}, 0xc000beaf50, 0xc0013d1f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36d9990, 0xc00010e240}, 0xf0?, 0xc000beaf50, 0xc000beaf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36d9990?, 0xc00010e240?}, 0xc0014f2340?, 0x551a40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0007b9b40?, 0xc001d1e860?, 0xc000beafa8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2428
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:142 +0x29a

goroutine 1877 [chan receive, 17 minutes]:
testing.(*T).Run(0xc001b8d040, {0x2650fa1?, 0x0?}, 0xc0001ac600)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001b8d040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001b8d040, 0xc000b9e1c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1793
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1959 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc00088f720)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0007db6c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0007db6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0007db6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0007db6c0, 0xc00050ca00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1822
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2270 [chan receive, 13 minutes]:
testing.(*T).Run(0xc0007dba00, {0x265cd89?, 0x60400000004?}, 0xc000592380)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0007dba00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0007dba00, 0xc000796080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1874
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1874 [chan receive, 17 minutes]:
testing.(*T).Run(0xc001b8c680, {0x2650fa1?, 0x0?}, 0xc000796080)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001b8c680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001b8c680, 0xc000b9e100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1793
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2710 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36d97d0, 0xc0003c4ee0}, {0x36ccee0, 0xc00169d0e0}, 0x1, 0x0, 0xc001ae9c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36d97d0?, 0xc0004860e0?}, 0x3b9aca00, 0xc0013e9e10?, 0x1, 0xc0013e9c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36d97d0, 0xc0004860e0}, 0xc0014f2820, {0xc001aae060, 0x11}, {0x2675963, 0x14}, {0x268d50b, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x36d97d0, 0xc0004860e0}, 0xc0014f2820, {0xc001aae060, 0x11}, {0x265ab83?, 0xc000093760?}, {0x551113?, 0x4a16ef?}, {0xc000766600, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0014f2820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0014f2820, 0xc001474980)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2319
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1960 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc00088f720)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0007db860)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0007db860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0007db860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0007db860, 0xc00050ca80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1822
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2553 [IO wait]:
internal/poll.runtime_pollWait(0x7f0056487ac0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0013eec00?, 0xc0009c9800?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0013eec00, {0xc0009c9800, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc0013eec00, {0xc0009c9800?, 0x7f00552d4938?, 0xc00152d650?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc001b56060, {0xc0009c9800?, 0xc001544938?, 0x41467b?})
	/usr/local/go/src/net/net.go:179 +0x45
crypto/tls.(*atLeastReader).Read(0xc00152d650, {0xc0009c9800?, 0x0?, 0xc00152d650?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc001a577b0, {0x36b6220, 0xc00152d650})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc001a57508, {0x36b5600, 0xc001b56060}, 0xc001544980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc001a57508, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc001a57508, {0xc0008be000, 0x1000, 0xc001b29340?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc0000f5920, {0xc0000342e0, 0x9, 0x49a5c00?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x36b4700, 0xc0000f5920}, {0xc0000342e0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0000342e0, 0x9, 0x1544dc0?}, {0x36b4700?, 0xc0000f5920?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.26.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0000342a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.26.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc001544fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.26.0/http2/transport.go:2358 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc000223500)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.26.0/http2/transport.go:2254 +0x8b
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2552
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.26.0/http2/transport.go:869 +0xd1b

goroutine 2295 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36d9990, 0xc00010e240}, 0xc001433750, 0xc0013d5f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36d9990, 0xc00010e240}, 0x0?, 0xc001433750, 0xc001433798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36d9990?, 0xc00010e240?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0014337d0?, 0xa122e5?, 0xc0014c9a40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2348
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:142 +0x29a

goroutine 2566 [IO wait, 3 minutes]:
internal/poll.runtime_pollWait(0x7f00564876e0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0017b09c0?, 0xc00192ab71?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0017b09c0, {0xc00192ab71, 0x48f, 0x48f})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001526158, {0xc00192ab71?, 0x2199be0?, 0x208?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00189e660, {0x36b4520, 0xc001b56310})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36b4660, 0xc00189e660}, {0x36b4520, 0xc001b56310}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001526158?, {0x36b4660, 0xc00189e660})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc001526158, {0x36b4660, 0xc00189e660})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36b4660, 0xc00189e660}, {0x36b4580, 0xc001526158}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000592380?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2565
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

goroutine 2466 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc00017fd90, 0x2)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x213f080?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001961ec0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00017fdc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007b9c40, {0x36b5a80, 0xc00189f890}, 0x1, 0xc00010e240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0007b9c40, 0x3b9aca00, 0x0, 0x1, 0xc00010e240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2428
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:140 +0x1ef

goroutine 2427 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0015fa060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2437
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2635 [IO wait]:
internal/poll.runtime_pollWait(0x7f0056487bb8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001474100?, 0xc0013b2000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001474100, {0xc0013b2000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc001474100, {0xc0013b2000?, 0xc000814780?, 0x2?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc001b561e0, {0xc0013b2000?, 0xc0013b205f?, 0x6f?})
	/usr/local/go/src/net/net.go:179 +0x45
crypto/tls.(*atLeastReader).Read(0xc001534ba0, {0xc0013b2000?, 0x0?, 0xc001534ba0?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc001a57b30, {0x36b6220, 0xc001534ba0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc001a57888, {0x7f005530f0d8, 0xc001940840}, 0xc00141c980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc001a57888, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc001a57888, {0xc0013dc000, 0x1000, 0xc001516540?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc0014c8000, {0xc000034ac0, 0x9, 0x49a5c00?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x36b4700, 0xc0014c8000}, {0xc000034ac0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc000034ac0, 0x9, 0x141cdc0?}, {0x36b4700?, 0xc0014c8000?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.26.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc000034a80)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.26.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00141cfa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.26.0/http2/transport.go:2358 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc000bfc300)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.26.0/http2/transport.go:2254 +0x8b
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2634
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.26.0/http2/transport.go:869 +0xd1b

goroutine 2568 [select, 13 minutes]:
os/exec.(*Cmd).watchCtx(0xc0019a8580, 0xc0017366c0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2565
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 2567 [IO wait]:
internal/poll.runtime_pollWait(0x7f0056487da8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0017b0a80?, 0xc001f031bd?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0017b0a80, {0xc001f031bd, 0x14e43, 0x14e43})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001526170, {0xc001f031bd?, 0x45fc89?, 0x3fed6?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00189e690, {0x36b4520, 0xc0006ba708})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36b4660, 0xc00189e690}, {0x36b4520, 0xc0006ba708}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001526170?, {0x36b4660, 0xc00189e690})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc001526170, {0x36b4660, 0xc00189e690})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36b4660, 0xc00189e690}, {0x36b4580, 0xc001526170}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000796100?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2565
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab


Test pass (163/207)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 52.47
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.2/json-events 13.63
13 TestDownloadOnly/v1.30.2/preload-exists 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.06
18 TestDownloadOnly/v1.30.2/DeleteAll 0.13
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.54
22 TestOffline 124.94
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.04
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
28 TestCertOptions 45.72
29 TestCertExpiration 288.57
31 TestForceSystemdFlag 74.75
32 TestForceSystemdEnv 52.21
34 TestKVMDriverInstallOrUpdate 5.08
38 TestErrorSpam/setup 47.52
39 TestErrorSpam/start 0.32
40 TestErrorSpam/status 0.71
41 TestErrorSpam/pause 1.48
42 TestErrorSpam/unpause 1.56
43 TestErrorSpam/stop 4.67
46 TestFunctional/serial/CopySyncFile 0
47 TestFunctional/serial/StartWithProxy 92.91
48 TestFunctional/serial/AuditLog 0
49 TestFunctional/serial/SoftStart 39.87
50 TestFunctional/serial/KubeContext 0.04
51 TestFunctional/serial/KubectlGetPods 0.08
54 TestFunctional/serial/CacheCmd/cache/add_remote 3.06
55 TestFunctional/serial/CacheCmd/cache/add_local 2.16
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.04
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
59 TestFunctional/serial/CacheCmd/cache/cache_reload 1.54
60 TestFunctional/serial/CacheCmd/cache/delete 0.08
61 TestFunctional/serial/MinikubeKubectlCmd 0.1
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
63 TestFunctional/serial/ExtraConfig 31.84
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 1.29
66 TestFunctional/serial/LogsFileCmd 1.35
67 TestFunctional/serial/InvalidService 4.6
69 TestFunctional/parallel/ConfigCmd 0.31
70 TestFunctional/parallel/DashboardCmd 12.25
71 TestFunctional/parallel/DryRun 0.28
72 TestFunctional/parallel/InternationalLanguage 0.14
73 TestFunctional/parallel/StatusCmd 1.24
77 TestFunctional/parallel/ServiceCmdConnect 10.61
78 TestFunctional/parallel/AddonsCmd 0.12
79 TestFunctional/parallel/PersistentVolumeClaim 49.44
81 TestFunctional/parallel/SSHCmd 0.42
82 TestFunctional/parallel/CpCmd 1.32
83 TestFunctional/parallel/MySQL 37.6
84 TestFunctional/parallel/FileSync 0.19
85 TestFunctional/parallel/CertSync 1.62
89 TestFunctional/parallel/NodeLabels 0.07
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
93 TestFunctional/parallel/License 0.64
103 TestFunctional/parallel/ServiceCmd/DeployApp 10.19
104 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
105 TestFunctional/parallel/ProfileCmd/profile_list 0.37
106 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
107 TestFunctional/parallel/MountCmd/any-port 8.6
108 TestFunctional/parallel/ServiceCmd/List 0.39
109 TestFunctional/parallel/MountCmd/specific-port 2
110 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
111 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
112 TestFunctional/parallel/ServiceCmd/Format 0.35
113 TestFunctional/parallel/ServiceCmd/URL 0.39
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
118 TestFunctional/parallel/ImageCommands/ImageBuild 3.8
119 TestFunctional/parallel/ImageCommands/Setup 2.81
120 TestFunctional/parallel/MountCmd/VerifyCleanup 1.58
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.15
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.21
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.29
127 TestFunctional/parallel/Version/short 0.05
128 TestFunctional/parallel/Version/components 0.54
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.49
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 6.24
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.06
133 TestFunctional/delete_addon-resizer_images 0.07
134 TestFunctional/delete_my-image_image 0.01
135 TestFunctional/delete_minikube_cached_images 0.01
139 TestMultiControlPlane/serial/StartCluster 195.54
140 TestMultiControlPlane/serial/DeployApp 6.42
141 TestMultiControlPlane/serial/PingHostFromPods 1.2
142 TestMultiControlPlane/serial/AddWorkerNode 47.6
143 TestMultiControlPlane/serial/NodeLabels 0.06
144 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.53
145 TestMultiControlPlane/serial/CopyFile 12.58
147 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.45
149 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.39
151 TestMultiControlPlane/serial/DeleteSecondaryNode 17.22
152 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
154 TestMultiControlPlane/serial/RestartCluster 358.82
155 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.4
156 TestMultiControlPlane/serial/AddSecondaryNode 71.5
157 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.52
161 TestJSONOutput/start/Command 56.41
162 TestJSONOutput/start/Audit 0
164 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/pause/Command 0.68
168 TestJSONOutput/pause/Audit 0
170 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/unpause/Command 0.61
174 TestJSONOutput/unpause/Audit 0
176 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/stop/Command 7.33
180 TestJSONOutput/stop/Audit 0
182 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
184 TestErrorJSONOutput 0.18
189 TestMainNoArgs 0.04
190 TestMinikubeProfile 84.64
193 TestMountStart/serial/StartWithMountFirst 23.89
194 TestMountStart/serial/VerifyMountFirst 0.34
195 TestMountStart/serial/StartWithMountSecond 24.27
196 TestMountStart/serial/VerifyMountSecond 0.36
197 TestMountStart/serial/DeleteFirst 0.9
198 TestMountStart/serial/VerifyMountPostDelete 0.36
199 TestMountStart/serial/Stop 1.31
200 TestMountStart/serial/RestartStopped 20.91
201 TestMountStart/serial/VerifyMountPostStop 0.36
204 TestMultiNode/serial/FreshStart2Nodes 101.96
205 TestMultiNode/serial/DeployApp2Nodes 5.49
206 TestMultiNode/serial/PingHostFrom2Pods 0.76
207 TestMultiNode/serial/AddNode 37.37
208 TestMultiNode/serial/MultiNodeLabels 0.06
209 TestMultiNode/serial/ProfileList 0.22
210 TestMultiNode/serial/CopyFile 6.93
211 TestMultiNode/serial/StopNode 2.33
212 TestMultiNode/serial/StartAfterStop 29.09
214 TestMultiNode/serial/DeleteNode 2.29
216 TestMultiNode/serial/RestartMultiNode 186.8
217 TestMultiNode/serial/ValidateNameConflict 45.58
224 TestScheduledStopUnix 110.45
228 TestRunningBinaryUpgrade 253.82
233 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
234 TestNoKubernetes/serial/StartWithK8s 98.01
235 TestNoKubernetes/serial/StartWithStopK8s 32.09
236 TestStoppedBinaryUpgrade/Setup 2.63
237 TestStoppedBinaryUpgrade/Upgrade 127.48
238 TestNoKubernetes/serial/Start 51.11
239 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
240 TestNoKubernetes/serial/ProfileList 10.5
241 TestNoKubernetes/serial/Stop 1.56
242 TestNoKubernetes/serial/StartNoArgs 21.82
243 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
252 TestPause/serial/Start 111.73
253 TestStoppedBinaryUpgrade/MinikubeLogs 0.97
x
+
TestDownloadOnly/v1.20.0/json-events (52.47s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-939143 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-939143 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (52.473279284s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (52.47s)

x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-939143
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-939143: exit status 85 (57.028439ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-939143 | jenkins | v1.33.1 | 25 Jun 24 15:09 UTC |          |
	|         | -p download-only-939143        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/25 15:09:23
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0625 15:09:23.925160   21250 out.go:291] Setting OutFile to fd 1 ...
	I0625 15:09:23.925370   21250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 15:09:23.925378   21250 out.go:304] Setting ErrFile to fd 2...
	I0625 15:09:23.925382   21250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 15:09:23.925531   21250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	W0625 15:09:23.925644   21250 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19128-13846/.minikube/config/config.json: open /home/jenkins/minikube-integration/19128-13846/.minikube/config/config.json: no such file or directory
	I0625 15:09:23.926153   21250 out.go:298] Setting JSON to true
	I0625 15:09:23.927004   21250 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3108,"bootTime":1719325056,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0625 15:09:23.927059   21250 start.go:139] virtualization: kvm guest
	I0625 15:09:23.929383   21250 out.go:97] [download-only-939143] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0625 15:09:23.929486   21250 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball: no such file or directory
	I0625 15:09:23.929524   21250 notify.go:220] Checking for updates...
	I0625 15:09:23.931063   21250 out.go:169] MINIKUBE_LOCATION=19128
	I0625 15:09:23.932390   21250 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0625 15:09:23.933529   21250 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19128-13846/kubeconfig
	I0625 15:09:23.934721   21250 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 15:09:23.935806   21250 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0625 15:09:23.937903   21250 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0625 15:09:23.938079   21250 driver.go:392] Setting default libvirt URI to qemu:///system
	I0625 15:09:24.034329   21250 out.go:97] Using the kvm2 driver based on user configuration
	I0625 15:09:24.034362   21250 start.go:297] selected driver: kvm2
	I0625 15:09:24.034368   21250 start.go:901] validating driver "kvm2" against <nil>
	I0625 15:09:24.034706   21250 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0625 15:09:24.034832   21250 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19128-13846/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0625 15:09:24.048747   21250 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0625 15:09:24.048793   21250 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0625 15:09:24.049274   21250 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0625 15:09:24.049418   21250 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0625 15:09:24.049473   21250 cni.go:84] Creating CNI manager for ""
	I0625 15:09:24.049485   21250 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0625 15:09:24.049492   21250 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0625 15:09:24.049538   21250 start.go:340] cluster config:
	{Name:download-only-939143 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-939143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 15:09:24.049687   21250 iso.go:125] acquiring lock: {Name:mk76df652d5e768afc73443035d5ecb8b75ed16c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0625 15:09:24.051518   21250 out.go:97] Downloading VM boot image ...
	I0625 15:09:24.051548   21250 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19128-13846/.minikube/cache/iso/amd64/minikube-v1.33.1-1719245461-19128-amd64.iso
	I0625 15:09:41.760904   21250 out.go:97] Starting "download-only-939143" primary control-plane node in "download-only-939143" cluster
	I0625 15:09:41.760937   21250 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0625 15:09:41.870315   21250 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0625 15:09:41.870352   21250 cache.go:56] Caching tarball of preloaded images
	I0625 15:09:41.870552   21250 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0625 15:09:41.872308   21250 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0625 15:09:41.872329   21250 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0625 15:09:41.982143   21250 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0625 15:09:57.790950   21250 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0625 15:09:57.791045   21250 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0625 15:09:58.694242   21250 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0625 15:09:58.694599   21250 profile.go:143] Saving config to /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/download-only-939143/config.json ...
	I0625 15:09:58.694631   21250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/download-only-939143/config.json: {Name:mk6074e497774aec5b0e2428ace1cadca2be26da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0625 15:09:58.694806   21250 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0625 15:09:58.694994   21250 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19128-13846/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-939143 host does not exist
	  To start a cluster, run: "minikube start -p download-only-939143"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-939143
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

x
+
TestDownloadOnly/v1.30.2/json-events (13.63s)

=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-938243 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-938243 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.63176611s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (13.63s)

x
+
TestDownloadOnly/v1.30.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.30.2/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-938243
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-938243: exit status 85 (54.852205ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-939143 | jenkins | v1.33.1 | 25 Jun 24 15:09 UTC |                     |
	|         | -p download-only-939143        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 25 Jun 24 15:10 UTC | 25 Jun 24 15:10 UTC |
	| delete  | -p download-only-939143        | download-only-939143 | jenkins | v1.33.1 | 25 Jun 24 15:10 UTC | 25 Jun 24 15:10 UTC |
	| start   | -o=json --download-only        | download-only-938243 | jenkins | v1.33.1 | 25 Jun 24 15:10 UTC |                     |
	|         | -p download-only-938243        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/25 15:10:16
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0625 15:10:16.700175   21587 out.go:291] Setting OutFile to fd 1 ...
	I0625 15:10:16.700277   21587 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 15:10:16.700288   21587 out.go:304] Setting ErrFile to fd 2...
	I0625 15:10:16.700293   21587 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 15:10:16.700477   21587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 15:10:16.700971   21587 out.go:298] Setting JSON to true
	I0625 15:10:16.701779   21587 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3161,"bootTime":1719325056,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0625 15:10:16.701839   21587 start.go:139] virtualization: kvm guest
	I0625 15:10:16.704057   21587 out.go:97] [download-only-938243] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0625 15:10:16.704205   21587 notify.go:220] Checking for updates...
	I0625 15:10:16.705825   21587 out.go:169] MINIKUBE_LOCATION=19128
	I0625 15:10:16.707317   21587 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0625 15:10:16.708686   21587 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19128-13846/kubeconfig
	I0625 15:10:16.710077   21587 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 15:10:16.711524   21587 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0625 15:10:16.714055   21587 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0625 15:10:16.714355   21587 driver.go:392] Setting default libvirt URI to qemu:///system
	I0625 15:10:16.745294   21587 out.go:97] Using the kvm2 driver based on user configuration
	I0625 15:10:16.745314   21587 start.go:297] selected driver: kvm2
	I0625 15:10:16.745319   21587 start.go:901] validating driver "kvm2" against <nil>
	I0625 15:10:16.745628   21587 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0625 15:10:16.745701   21587 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19128-13846/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0625 15:10:16.760340   21587 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0625 15:10:16.760396   21587 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0625 15:10:16.760876   21587 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0625 15:10:16.760998   21587 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0625 15:10:16.761018   21587 cni.go:84] Creating CNI manager for ""
	I0625 15:10:16.761025   21587 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0625 15:10:16.761036   21587 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0625 15:10:16.761092   21587 start.go:340] cluster config:
	{Name:download-only-938243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:download-only-938243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 15:10:16.761177   21587 iso.go:125] acquiring lock: {Name:mk76df652d5e768afc73443035d5ecb8b75ed16c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0625 15:10:16.762839   21587 out.go:97] Starting "download-only-938243" primary control-plane node in "download-only-938243" cluster
	I0625 15:10:16.762854   21587 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0625 15:10:16.874892   21587 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0625 15:10:16.874925   21587 cache.go:56] Caching tarball of preloaded images
	I0625 15:10:16.875098   21587 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0625 15:10:16.877101   21587 out.go:97] Downloading Kubernetes v1.30.2 preload ...
	I0625 15:10:16.877132   21587 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 ...
	I0625 15:10:16.984507   21587 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:cd14409e225276132db5cf7d5d75c2d2 -> /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-938243 host does not exist
	  To start a cluster, run: "minikube start -p download-only-938243"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.06s)
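For reference, the preload this run fetched can be pre-seeded by hand into the same cache location; a minimal sketch reusing the URL, md5 checksum and target path printed in the download.go line above (the curl/md5sum invocations are illustrative and not part of the test):
	curl -L -o /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	# compare against the checksum minikube reported: md5:cd14409e225276132db5cf7d5d75c2d2
	md5sum /home/jenkins/minikube-integration/19128-13846/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4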

                                                
                                    
TestDownloadOnly/v1.30.2/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-938243
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.54s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-690555 --alsologtostderr --binary-mirror http://127.0.0.1:40549 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-690555" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-690555
--- PASS: TestBinaryMirror (0.54s)

                                                
                                    
TestOffline (124.94s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-346486 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-346486 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m3.937376775s)
helpers_test.go:175: Cleaning up "offline-crio-346486" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-346486
--- PASS: TestOffline (124.94s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-739670
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-739670: exit status 85 (44.233782ms)

                                                
                                                
-- stdout --
	* Profile "addons-739670" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-739670"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-739670
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-739670: exit status 85 (45.428909ms)

                                                
                                                
-- stdout --
	* Profile "addons-739670" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-739670"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestCertOptions (45.72s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-742979 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-742979 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (44.431185932s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-742979 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-742979 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-742979 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-742979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-742979
--- PASS: TestCertOptions (45.72s)
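The openssl step above asserts on the SANs baked into the apiserver certificate. A minimal sketch for inspecting them by hand, assuming the cert-options-742979 profile were still running (the grep filter is illustrative; the test parses the full openssl output):
	out/minikube-linux-amd64 -p cert-options-742979 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'
	# expect the values passed at start time via --apiserver-ips/--apiserver-names to be listed here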

                                                
                                    
TestCertExpiration (288.57s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-076008 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-076008 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m16.039304863s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-076008 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-076008 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (31.765703419s)
helpers_test.go:175: Cleaning up "cert-expiration-076008" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-076008
--- PASS: TestCertExpiration (288.57s)
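The two starts above first mint certificates with a 3m lifetime and then regenerate them with --cert-expiration=8760h. A minimal sketch for checking the resulting expiry by hand, assuming the cert-expiration-076008 profile were still running (openssl's -enddate selector is the only piece not taken from this log):
	out/minikube-linux-amd64 -p cert-expiration-076008 ssh "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"
	# after the second start, notAfter should be roughly one year (8760h) out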

                                                
                                    
TestForceSystemdFlag (74.75s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-740596 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-740596 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m13.766163739s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-740596 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-740596" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-740596
--- PASS: TestForceSystemdFlag (74.75s)
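The cat step above is how the test confirms that --force-systemd switched CRI-O to the systemd cgroup manager. A minimal sketch of the same check, assuming the force-systemd-flag-740596 profile were still up (cgroup_manager is CRI-O's standard config key, not something printed in this log):
	out/minikube-linux-amd64 -p force-systemd-flag-740596 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
	# expected with --force-systemd: cgroup_manager = "systemd"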

                                                
                                    
TestForceSystemdEnv (52.21s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-759584 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-759584 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (51.213923858s)
helpers_test.go:175: Cleaning up "force-systemd-env-759584" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-759584
--- PASS: TestForceSystemdEnv (52.21s)

                                                
                                    
TestKVMDriverInstallOrUpdate (5.08s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
E0625 16:49:29.127453   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
--- PASS: TestKVMDriverInstallOrUpdate (5.08s)

                                                
                                    
TestErrorSpam/setup (47.52s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-530966 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-530966 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-530966 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-530966 --driver=kvm2  --container-runtime=crio: (47.517835276s)
--- PASS: TestErrorSpam/setup (47.52s)

                                                
                                    
TestErrorSpam/start (0.32s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530966 --log_dir /tmp/nospam-530966 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530966 --log_dir /tmp/nospam-530966 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530966 --log_dir /tmp/nospam-530966 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

                                                
                                    
TestErrorSpam/status (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530966 --log_dir /tmp/nospam-530966 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530966 --log_dir /tmp/nospam-530966 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530966 --log_dir /tmp/nospam-530966 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
TestErrorSpam/pause (1.48s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530966 --log_dir /tmp/nospam-530966 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530966 --log_dir /tmp/nospam-530966 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530966 --log_dir /tmp/nospam-530966 pause
--- PASS: TestErrorSpam/pause (1.48s)

                                                
                                    
TestErrorSpam/unpause (1.56s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530966 --log_dir /tmp/nospam-530966 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530966 --log_dir /tmp/nospam-530966 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530966 --log_dir /tmp/nospam-530966 unpause
--- PASS: TestErrorSpam/unpause (1.56s)

                                                
                                    
TestErrorSpam/stop (4.67s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530966 --log_dir /tmp/nospam-530966 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-530966 --log_dir /tmp/nospam-530966 stop: (1.593981738s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530966 --log_dir /tmp/nospam-530966 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-530966 --log_dir /tmp/nospam-530966 stop: (1.58312296s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530966 --log_dir /tmp/nospam-530966 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-530966 --log_dir /tmp/nospam-530966 stop: (1.489801124s)
--- PASS: TestErrorSpam/stop (4.67s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19128-13846/.minikube/files/etc/test/nested/copy/21239/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (92.91s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-951282 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-951282 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m32.905345593s)
--- PASS: TestFunctional/serial/StartWithProxy (92.91s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (39.87s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-951282 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-951282 --alsologtostderr -v=8: (39.866546163s)
functional_test.go:659: soft start took 39.867365261s for "functional-951282" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.87s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-951282 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-951282 cache add registry.k8s.io/pause:3.3: (1.077055623s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-951282 cache add registry.k8s.io/pause:latest: (1.024315955s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.16s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-951282 /tmp/TestFunctionalserialCacheCmdcacheadd_local1319207315/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 cache add minikube-local-cache-test:functional-951282
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-951282 cache add minikube-local-cache-test:functional-951282: (1.880169518s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 cache delete minikube-local-cache-test:functional-951282
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-951282
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.16s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-951282 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (198.85273ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)
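The cache_reload sequence above is the flow for restoring an image that was removed on the node: delete it with crictl, confirm inspecti fails, then cache reload re-pushes everything held in minikube's cache. A condensed sketch of that loop, using only subcommands already exercised above:
	out/minikube-linux-amd64 -p functional-951282 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-951282 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits non-zero: image gone
	out/minikube-linux-amd64 -p functional-951282 cache reload
	out/minikube-linux-amd64 -p functional-951282 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again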

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 kubectl -- --context functional-951282 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-951282 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                    
TestFunctional/serial/ExtraConfig (31.84s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-951282 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-951282 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.84465487s)
functional_test.go:757: restart took 31.844759837s for "functional-951282" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (31.84s)
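ExtraConfig restarts the cluster with --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision. A minimal sketch for confirming the flag reached the apiserver, reusing the control-plane label query that ComponentHealth runs below (the grep is illustrative and not part of the test):
	kubectl --context functional-951282 get po -l tier=control-plane -n kube-system -o yaml | grep enable-admission-plugins
	# should list NamespaceAutoProvision among the kube-apiserver admission plugins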

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-951282 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.29s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-951282 logs: (1.293500554s)
--- PASS: TestFunctional/serial/LogsCmd (1.29s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.35s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 logs --file /tmp/TestFunctionalserialLogsFileCmd2211349491/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-951282 logs --file /tmp/TestFunctionalserialLogsFileCmd2211349491/001/logs.txt: (1.351274645s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.35s)

                                                
                                    
TestFunctional/serial/InvalidService (4.6s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-951282 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-951282
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-951282: exit status 115 (270.719038ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.55:32736 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-951282 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-951282 delete -f testdata/invalidsvc.yaml: (1.126219092s)
--- PASS: TestFunctional/serial/InvalidService (4.60s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-951282 config get cpus: exit status 14 (53.548157ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-951282 config get cpus: exit status 14 (47.888764ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.31s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (12.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-951282 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-951282 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 34979: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.25s)

                                                
                                    
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-951282 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-951282 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (129.101593ms)

                                                
                                                
-- stdout --
	* [functional-951282] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19128-13846/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19128-13846/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0625 15:54:41.236037   34590 out.go:291] Setting OutFile to fd 1 ...
	I0625 15:54:41.236167   34590 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 15:54:41.236177   34590 out.go:304] Setting ErrFile to fd 2...
	I0625 15:54:41.236183   34590 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 15:54:41.236354   34590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 15:54:41.236826   34590 out.go:298] Setting JSON to false
	I0625 15:54:41.237770   34590 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5825,"bootTime":1719325056,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0625 15:54:41.237827   34590 start.go:139] virtualization: kvm guest
	I0625 15:54:41.239806   34590 out.go:177] * [functional-951282] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0625 15:54:41.241164   34590 out.go:177]   - MINIKUBE_LOCATION=19128
	I0625 15:54:41.241222   34590 notify.go:220] Checking for updates...
	I0625 15:54:41.244885   34590 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0625 15:54:41.246063   34590 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19128-13846/kubeconfig
	I0625 15:54:41.247212   34590 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 15:54:41.248359   34590 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0625 15:54:41.249483   34590 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0625 15:54:41.250967   34590 config.go:182] Loaded profile config "functional-951282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 15:54:41.251367   34590 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:54:41.251421   34590 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:54:41.266446   34590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38823
	I0625 15:54:41.266859   34590 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:54:41.267454   34590 main.go:141] libmachine: Using API Version  1
	I0625 15:54:41.267474   34590 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:54:41.267756   34590 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:54:41.267957   34590 main.go:141] libmachine: (functional-951282) Calling .DriverName
	I0625 15:54:41.268236   34590 driver.go:392] Setting default libvirt URI to qemu:///system
	I0625 15:54:41.268649   34590 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:54:41.268691   34590 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:54:41.283515   34590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43239
	I0625 15:54:41.283937   34590 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:54:41.284424   34590 main.go:141] libmachine: Using API Version  1
	I0625 15:54:41.284446   34590 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:54:41.284798   34590 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:54:41.285054   34590 main.go:141] libmachine: (functional-951282) Calling .DriverName
	I0625 15:54:41.318509   34590 out.go:177] * Using the kvm2 driver based on existing profile
	I0625 15:54:41.319620   34590 start.go:297] selected driver: kvm2
	I0625 15:54:41.319633   34590 start.go:901] validating driver "kvm2" against &{Name:functional-951282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:functional-951282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.55 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 15:54:41.319741   34590 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0625 15:54:41.321475   34590 out.go:177] 
	W0625 15:54:41.322596   34590 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0625 15:54:41.323647   34590 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-951282 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-951282 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-951282 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (142.388233ms)

                                                
                                                
-- stdout --
	* [functional-951282] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19128-13846/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19128-13846/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0625 15:54:41.107570   34563 out.go:291] Setting OutFile to fd 1 ...
	I0625 15:54:41.107674   34563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 15:54:41.107682   34563 out.go:304] Setting ErrFile to fd 2...
	I0625 15:54:41.107686   34563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 15:54:41.107964   34563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 15:54:41.108594   34563 out.go:298] Setting JSON to false
	I0625 15:54:41.109883   34563 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5825,"bootTime":1719325056,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0625 15:54:41.109964   34563 start.go:139] virtualization: kvm guest
	I0625 15:54:41.112366   34563 out.go:177] * [functional-951282] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0625 15:54:41.114182   34563 out.go:177]   - MINIKUBE_LOCATION=19128
	I0625 15:54:41.114200   34563 notify.go:220] Checking for updates...
	I0625 15:54:41.116511   34563 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0625 15:54:41.117734   34563 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19128-13846/kubeconfig
	I0625 15:54:41.119032   34563 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19128-13846/.minikube
	I0625 15:54:41.120225   34563 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0625 15:54:41.121375   34563 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0625 15:54:41.122947   34563 config.go:182] Loaded profile config "functional-951282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 15:54:41.123358   34563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:54:41.123420   34563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:54:41.137850   34563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36205
	I0625 15:54:41.138194   34563 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:54:41.138734   34563 main.go:141] libmachine: Using API Version  1
	I0625 15:54:41.138754   34563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:54:41.139059   34563 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:54:41.139272   34563 main.go:141] libmachine: (functional-951282) Calling .DriverName
	I0625 15:54:41.139528   34563 driver.go:392] Setting default libvirt URI to qemu:///system
	I0625 15:54:41.139814   34563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 15:54:41.139853   34563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 15:54:41.154081   34563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33795
	I0625 15:54:41.154489   34563 main.go:141] libmachine: () Calling .GetVersion
	I0625 15:54:41.154965   34563 main.go:141] libmachine: Using API Version  1
	I0625 15:54:41.154992   34563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 15:54:41.155290   34563 main.go:141] libmachine: () Calling .GetMachineName
	I0625 15:54:41.155466   34563 main.go:141] libmachine: (functional-951282) Calling .DriverName
	I0625 15:54:41.187347   34563 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0625 15:54:41.188546   34563 start.go:297] selected driver: kvm2
	I0625 15:54:41.188572   34563 start.go:901] validating driver "kvm2" against &{Name:functional-951282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19128/minikube-v1.33.1-1719245461-19128-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719245474-19128@sha256:bbc3efa956ea8111b9efadfdfeff422d4c4d743a7756ec7b74c70abade56ceb8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:functional-951282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.55 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0625 15:54:41.188757   34563 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0625 15:54:41.191055   34563 out.go:177] 
	W0625 15:54:41.192236   34563 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0625 15:54:41.193396   34563 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.24s)
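
For reference, a minimal shell sketch of the status checks above, run by hand against the same profile; note that the test's format string uses the literal label "kublet", while the underlying template field is .Kubelet:
	out/minikube-linux-amd64 -p functional-951282 status
	out/minikube-linux-amd64 -p functional-951282 status -o json
	out/minikube-linux-amd64 -p functional-951282 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'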

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-951282 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-951282 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-9pb8h" [3b1af8fc-ba8e-4e9b-9268-a139dc6ff404] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-9pb8h" [3b1af8fc-ba8e-4e9b-9268-a139dc6ff404] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003467698s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.55:31499
functional_test.go:1671: http://192.168.39.55:31499: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-9pb8h

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.55:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.55:31499
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.61s)
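
A compact sketch of reproducing this check by hand, assuming the same profile and curl on the host; the NodePort in the printed URL (31499 in this run) changes between runs:
	kubectl --context functional-951282 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-951282 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-amd64 -p functional-951282 service hello-node-connect --url)
	curl -s "$URL"   # should echo request details like the output captured above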

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (49.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [4513d5bd-ef70-464f-8633-72c69982f516] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004995372s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-951282 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-951282 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-951282 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-951282 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [509879f6-1c32-480c-ab59-4aef47cb2266] Pending
helpers_test.go:344: "sp-pod" [509879f6-1c32-480c-ab59-4aef47cb2266] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [509879f6-1c32-480c-ab59-4aef47cb2266] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.003233272s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-951282 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-951282 delete -f testdata/storage-provisioner/pod.yaml
2024/06/25 15:54:53 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-951282 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [aa654367-7fcd-4f02-8152-c80aeca837ce] Pending
helpers_test.go:344: "sp-pod" [aa654367-7fcd-4f02-8152-c80aeca837ce] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [aa654367-7fcd-4f02-8152-c80aeca837ce] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.003952129s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-951282 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (49.44s)
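
The sequence above can be replayed by hand to confirm that data written to the claim survives pod deletion; a sketch, assuming it is run from the minikube test directory where the testdata manifests live:
	kubectl --context functional-951282 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-951282 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-951282 wait --for=condition=Ready pod/sp-pod --timeout=3m
	kubectl --context functional-951282 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-951282 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-951282 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-951282 wait --for=condition=Ready pod/sp-pod --timeout=3m
	kubectl --context functional-951282 exec sp-pod -- ls /tmp/mount   # foo should still be listed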

                                                
                                    
TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh -n functional-951282 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 cp functional-951282:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2254999330/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh -n functional-951282 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh -n functional-951282 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.32s)

                                                
                                    
TestFunctional/parallel/MySQL (37.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-951282 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-8nddq" [9a8a7ff4-f836-4cfa-a2b5-303e4f055972] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-8nddq" [9a8a7ff4-f836-4cfa-a2b5-303e4f055972] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 35.003323394s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-951282 exec mysql-64454c8b5c-8nddq -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-951282 exec mysql-64454c8b5c-8nddq -- mysql -ppassword -e "show databases;": exit status 1 (132.26988ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-951282 exec mysql-64454c8b5c-8nddq -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-951282 exec mysql-64454c8b5c-8nddq -- mysql -ppassword -e "show databases;": exit status 1 (134.488056ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-951282 exec mysql-64454c8b5c-8nddq -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (37.60s)
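
The two ERROR 2002 exits above are expected: the pod reports Running before mysqld starts accepting connections on its socket, so the test simply retries. A hand-rolled equivalent of that retry, assuming the same pod name:
	until kubectl --context functional-951282 exec mysql-64454c8b5c-8nddq -- \
	    mysql -ppassword -e "show databases;"; do
	  sleep 2   # wait for mysqld inside the pod to come up
	done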

                                                
                                    
TestFunctional/parallel/FileSync (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/21239/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh "sudo cat /etc/test/nested/copy/21239/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)
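
This check exercises minikube's file sync: files placed under $MINIKUBE_HOME/files/ are copied into the VM at the same path. A sketch of staging that by hand, assuming MINIKUBE_HOME is set as in the run above (the 21239 path component is just this run's path):
	mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/21239"
	echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/test/nested/copy/21239/hosts"
	# the sync runs on "minikube start", so an already-running profile may need a restart to pick this up
	out/minikube-linux-amd64 -p functional-951282 ssh "sudo cat /etc/test/nested/copy/21239/hosts"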

                                                
                                    
TestFunctional/parallel/CertSync (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/21239.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh "sudo cat /etc/ssl/certs/21239.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/21239.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh "sudo cat /usr/share/ca-certificates/21239.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/212392.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh "sudo cat /etc/ssl/certs/212392.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/212392.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh "sudo cat /usr/share/ca-certificates/212392.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.62s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-951282 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-951282 ssh "sudo systemctl is-active docker": exit status 1 (231.416589ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-951282 ssh "sudo systemctl is-active containerd": exit status 1 (231.180898ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
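
The exit status 1 results above are the expected outcome: with crio selected as the runtime, systemctl is-active prints "inactive" for docker and containerd and exits 3, which the ssh wrapper surfaces as a non-zero status. A quick positive counterpart, not part of the test:
	out/minikube-linux-amd64 -p functional-951282 ssh "sudo systemctl is-active docker"       # prints "inactive", exits 3
	out/minikube-linux-amd64 -p functional-951282 ssh "sudo systemctl is-active containerd"   # prints "inactive", exits 3
	out/minikube-linux-amd64 -p functional-951282 ssh "sudo systemctl is-active crio"         # hypothetical check; expected "active"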

                                                
                                    
TestFunctional/parallel/License (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.64s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-951282 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-951282 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-xmkx4" [a1f3b6ce-1edc-44ed-b7e3-23a40f045cb8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-xmkx4" [a1f3b6ce-1edc-44ed-b7e3-23a40f045cb8] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.00464834s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.19s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "321.461861ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "45.237054ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "281.296986ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "45.967672ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-951282 /tmp/TestFunctionalparallelMountCmdany-port2817739734/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1719330871085978787" to /tmp/TestFunctionalparallelMountCmdany-port2817739734/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1719330871085978787" to /tmp/TestFunctionalparallelMountCmdany-port2817739734/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1719330871085978787" to /tmp/TestFunctionalparallelMountCmdany-port2817739734/001/test-1719330871085978787
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-951282 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (249.793417ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun 25 15:54 created-by-test
-rw-r--r-- 1 docker docker 24 Jun 25 15:54 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun 25 15:54 test-1719330871085978787
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh cat /mount-9p/test-1719330871085978787
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-951282 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [514e251d-115a-42b8-b26b-b6420deb5d82] Pending
helpers_test.go:344: "busybox-mount" [514e251d-115a-42b8-b26b-b6420deb5d82] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [514e251d-115a-42b8-b26b-b6420deb5d82] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [514e251d-115a-42b8-b26b-b6420deb5d82] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003750184s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-951282 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-951282 /tmp/TestFunctionalparallelMountCmdany-port2817739734/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.60s)
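
The first findmnt probe fails only because the 9p mount has not finished setting up; the test retries and then proceeds. A by-hand sketch of the same mount-and-verify flow, with /tmp/mount-demo standing in for the per-test temp directory:
	mkdir -p /tmp/mount-demo && echo hello > /tmp/mount-demo/created-by-hand
	out/minikube-linux-amd64 mount -p functional-951282 /tmp/mount-demo:/mount-9p &
	MOUNT_PID=$!
	sleep 2   # give the 9p mount a moment; the test's first findmnt probe raced this and retried
	out/minikube-linux-amd64 -p functional-951282 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-951282 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-951282 ssh "sudo umount -f /mount-9p"
	kill "$MOUNT_PID"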

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.39s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-951282 /tmp/TestFunctionalparallelMountCmdspecific-port2722254158/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-951282 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (259.249052ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-951282 /tmp/TestFunctionalparallelMountCmdspecific-port2722254158/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-951282 ssh "sudo umount -f /mount-9p": exit status 1 (269.741199ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-951282 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-951282 /tmp/TestFunctionalparallelMountCmdspecific-port2722254158/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 service list -o json
functional_test.go:1490: Took "330.752805ms" to run "out/minikube-linux-amd64 -p functional-951282 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.55:31119
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.55:31119
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-951282 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-951282
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-951282
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240513-cd2ac642
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-951282 image ls --format short --alsologtostderr:
I0625 15:55:11.043039   35794 out.go:291] Setting OutFile to fd 1 ...
I0625 15:55:11.043296   35794 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0625 15:55:11.043306   35794 out.go:304] Setting ErrFile to fd 2...
I0625 15:55:11.043312   35794 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0625 15:55:11.043534   35794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
I0625 15:55:11.044115   35794 config.go:182] Loaded profile config "functional-951282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0625 15:55:11.044209   35794 config.go:182] Loaded profile config "functional-951282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0625 15:55:11.044554   35794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0625 15:55:11.044609   35794 main.go:141] libmachine: Launching plugin server for driver kvm2
I0625 15:55:11.058370   35794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41947
I0625 15:55:11.060087   35794 main.go:141] libmachine: () Calling .GetVersion
I0625 15:55:11.060574   35794 main.go:141] libmachine: Using API Version  1
I0625 15:55:11.060597   35794 main.go:141] libmachine: () Calling .SetConfigRaw
I0625 15:55:11.060943   35794 main.go:141] libmachine: () Calling .GetMachineName
I0625 15:55:11.061372   35794 main.go:141] libmachine: (functional-951282) Calling .GetState
I0625 15:55:11.063155   35794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0625 15:55:11.063215   35794 main.go:141] libmachine: Launching plugin server for driver kvm2
I0625 15:55:11.076488   35794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44125
I0625 15:55:11.076857   35794 main.go:141] libmachine: () Calling .GetVersion
I0625 15:55:11.077298   35794 main.go:141] libmachine: Using API Version  1
I0625 15:55:11.077322   35794 main.go:141] libmachine: () Calling .SetConfigRaw
I0625 15:55:11.077765   35794 main.go:141] libmachine: () Calling .GetMachineName
I0625 15:55:11.077955   35794 main.go:141] libmachine: (functional-951282) Calling .DriverName
I0625 15:55:11.078169   35794 ssh_runner.go:195] Run: systemctl --version
I0625 15:55:11.078195   35794 main.go:141] libmachine: (functional-951282) Calling .GetSSHHostname
I0625 15:55:11.081175   35794 main.go:141] libmachine: (functional-951282) DBG | domain functional-951282 has defined MAC address 52:54:00:fb:d4:f4 in network mk-functional-951282
I0625 15:55:11.081581   35794 main.go:141] libmachine: (functional-951282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:d4:f4", ip: ""} in network mk-functional-951282: {Iface:virbr1 ExpiryTime:2024-06-25 16:51:43 +0000 UTC Type:0 Mac:52:54:00:fb:d4:f4 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:functional-951282 Clientid:01:52:54:00:fb:d4:f4}
I0625 15:55:11.081609   35794 main.go:141] libmachine: (functional-951282) DBG | domain functional-951282 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:d4:f4 in network mk-functional-951282
I0625 15:55:11.081720   35794 main.go:141] libmachine: (functional-951282) Calling .GetSSHPort
I0625 15:55:11.081868   35794 main.go:141] libmachine: (functional-951282) Calling .GetSSHKeyPath
I0625 15:55:11.082043   35794 main.go:141] libmachine: (functional-951282) Calling .GetSSHUsername
I0625 15:55:11.082164   35794 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/functional-951282/id_rsa Username:docker}
I0625 15:55:11.165588   35794 ssh_runner.go:195] Run: sudo crictl images --output json
I0625 15:55:11.217678   35794 main.go:141] libmachine: Making call to close driver server
I0625 15:55:11.217687   35794 main.go:141] libmachine: (functional-951282) Calling .Close
I0625 15:55:11.217955   35794 main.go:141] libmachine: Successfully made call to close driver server
I0625 15:55:11.217981   35794 main.go:141] libmachine: Making call to close connection to plugin binary
I0625 15:55:11.217992   35794 main.go:141] libmachine: Making call to close driver server
I0625 15:55:11.218006   35794 main.go:141] libmachine: (functional-951282) Calling .Close
I0625 15:55:11.218012   35794 main.go:141] libmachine: (functional-951282) DBG | Closing plugin on server side
I0625 15:55:11.218222   35794 main.go:141] libmachine: Successfully made call to close driver server
I0625 15:55:11.218244   35794 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
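
The short, table, and json listings in this group come from the same command with different --format values; side by side:
	out/minikube-linux-amd64 -p functional-951282 image ls --format short
	out/minikube-linux-amd64 -p functional-951282 image ls --format table
	out/minikube-linux-amd64 -p functional-951282 image ls --format json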

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-951282 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/google-containers/addon-resizer  | functional-951282  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/kube-apiserver          | v1.30.2            | 56ce0fd9fb532 | 118MB  |
| registry.k8s.io/kube-controller-manager | v1.30.2            | e874818b3caac | 112MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20240513-cd2ac642 | ac1c61439df46 | 65.9MB |
| docker.io/library/nginx                 | latest             | e0c9858e10ed8 | 192MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-951282  | 90fe88dd585b5 | 3.33kB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-proxy              | v1.30.2            | 53c535741fb44 | 86MB   |
| registry.k8s.io/kube-scheduler          | v1.30.2            | 7820c83aa1394 | 63.1MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-951282 image ls --format table --alsologtostderr:
I0625 15:55:11.266144   35846 out.go:291] Setting OutFile to fd 1 ...
I0625 15:55:11.266387   35846 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0625 15:55:11.266412   35846 out.go:304] Setting ErrFile to fd 2...
I0625 15:55:11.266423   35846 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0625 15:55:11.266637   35846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
I0625 15:55:11.267149   35846 config.go:182] Loaded profile config "functional-951282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0625 15:55:11.267258   35846 config.go:182] Loaded profile config "functional-951282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0625 15:55:11.267589   35846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0625 15:55:11.267650   35846 main.go:141] libmachine: Launching plugin server for driver kvm2
I0625 15:55:11.281298   35846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
I0625 15:55:11.281683   35846 main.go:141] libmachine: () Calling .GetVersion
I0625 15:55:11.282240   35846 main.go:141] libmachine: Using API Version  1
I0625 15:55:11.282259   35846 main.go:141] libmachine: () Calling .SetConfigRaw
I0625 15:55:11.282853   35846 main.go:141] libmachine: () Calling .GetMachineName
I0625 15:55:11.283069   35846 main.go:141] libmachine: (functional-951282) Calling .GetState
I0625 15:55:11.284592   35846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0625 15:55:11.284624   35846 main.go:141] libmachine: Launching plugin server for driver kvm2
I0625 15:55:11.297454   35846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43677
I0625 15:55:11.297888   35846 main.go:141] libmachine: () Calling .GetVersion
I0625 15:55:11.298369   35846 main.go:141] libmachine: Using API Version  1
I0625 15:55:11.298389   35846 main.go:141] libmachine: () Calling .SetConfigRaw
I0625 15:55:11.298734   35846 main.go:141] libmachine: () Calling .GetMachineName
I0625 15:55:11.298915   35846 main.go:141] libmachine: (functional-951282) Calling .DriverName
I0625 15:55:11.299101   35846 ssh_runner.go:195] Run: systemctl --version
I0625 15:55:11.299130   35846 main.go:141] libmachine: (functional-951282) Calling .GetSSHHostname
I0625 15:55:11.301889   35846 main.go:141] libmachine: (functional-951282) DBG | domain functional-951282 has defined MAC address 52:54:00:fb:d4:f4 in network mk-functional-951282
I0625 15:55:11.302320   35846 main.go:141] libmachine: (functional-951282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:d4:f4", ip: ""} in network mk-functional-951282: {Iface:virbr1 ExpiryTime:2024-06-25 16:51:43 +0000 UTC Type:0 Mac:52:54:00:fb:d4:f4 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:functional-951282 Clientid:01:52:54:00:fb:d4:f4}
I0625 15:55:11.302355   35846 main.go:141] libmachine: (functional-951282) DBG | domain functional-951282 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:d4:f4 in network mk-functional-951282
I0625 15:55:11.302456   35846 main.go:141] libmachine: (functional-951282) Calling .GetSSHPort
I0625 15:55:11.302612   35846 main.go:141] libmachine: (functional-951282) Calling .GetSSHKeyPath
I0625 15:55:11.302786   35846 main.go:141] libmachine: (functional-951282) Calling .GetSSHUsername
I0625 15:55:11.302926   35846 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/functional-951282/id_rsa Username:docker}
I0625 15:55:11.381349   35846 ssh_runner.go:195] Run: sudo crictl images --output json
I0625 15:55:11.446533   35846 main.go:141] libmachine: Making call to close driver server
I0625 15:55:11.446552   35846 main.go:141] libmachine: (functional-951282) Calling .Close
I0625 15:55:11.446807   35846 main.go:141] libmachine: Successfully made call to close driver server
I0625 15:55:11.446824   35846 main.go:141] libmachine: Making call to close connection to plugin binary
I0625 15:55:11.446831   35846 main.go:141] libmachine: (functional-951282) DBG | Closing plugin on server side
I0625 15:55:11.446833   35846 main.go:141] libmachine: Making call to close driver server
I0625 15:55:11.446867   35846 main.go:141] libmachine: (functional-951282) Calling .Close
I0625 15:55:11.447134   35846 main.go:141] libmachine: Successfully made call to close driver server
I0625 15:55:11.447143   35846 main.go:141] libmachine: Making call to close connection to plugin binary
I0625 15:55:11.447169   35846 main.go:141] libmachine: (functional-951282) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-951282 image ls --format json --alsologtostderr:
[{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-951282"],"size":"34114467"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc","registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.2"],"size":"63051080"},{"id":
"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"e0c9858e10ed8be697dc2809db78c57357ffc82de88c69a3dee5d148354679ef","repoDigests":["docker.io/library/nginx@sha256:4e02e85a6f060a8406978fa53aafd2d828d0cedf5259275d191bab9afc33249e","docker.io/library/nginx@sha256:9c367186df9a6b18c6735357b8eb7f407347e84aea09beb184961cb83543d46e"],"repoTags":["docker.io/library/nginx:latest"],"size":"191815842"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[
"gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e","registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.2"],"size":"112194888"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b7
8f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f","repoDigests":["docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266","docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"],"repoTags":["docker.io/kindest/kindnetd:v20240513-cd2ac642"],"size":"65908273"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8
443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"90fe88dd585b5322a57ec2bec33b97255d882f0a7243da979676156120ecc167","repoDigests":["localhost/minikube-local-cache-test@sha256:0bd512cf5a81f918ce4eb5782e1b85b2d8a772696ddad26c6329d033904880a6"],"repoTags":["localhost/minikube-local-cache-test:functional-951282"],"size":"3330"},{"id":"53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","repoDigests":["registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961","registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.2"],"size":"85953433"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121
f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","repoDigests":["registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816","registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.2"],"size":"117609954"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","r
epoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-951282 image ls --format json --alsologtostderr:
I0625 15:55:11.264232   35841 out.go:291] Setting OutFile to fd 1 ...
I0625 15:55:11.264495   35841 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0625 15:55:11.264544   35841 out.go:304] Setting ErrFile to fd 2...
I0625 15:55:11.264559   35841 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0625 15:55:11.264813   35841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
I0625 15:55:11.265505   35841 config.go:182] Loaded profile config "functional-951282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0625 15:55:11.265623   35841 config.go:182] Loaded profile config "functional-951282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0625 15:55:11.266022   35841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0625 15:55:11.266078   35841 main.go:141] libmachine: Launching plugin server for driver kvm2
I0625 15:55:11.280102   35841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44371
I0625 15:55:11.280518   35841 main.go:141] libmachine: () Calling .GetVersion
I0625 15:55:11.281066   35841 main.go:141] libmachine: Using API Version  1
I0625 15:55:11.281084   35841 main.go:141] libmachine: () Calling .SetConfigRaw
I0625 15:55:11.281522   35841 main.go:141] libmachine: () Calling .GetMachineName
I0625 15:55:11.281710   35841 main.go:141] libmachine: (functional-951282) Calling .GetState
I0625 15:55:11.283772   35841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0625 15:55:11.283841   35841 main.go:141] libmachine: Launching plugin server for driver kvm2
I0625 15:55:11.297464   35841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45521
I0625 15:55:11.297894   35841 main.go:141] libmachine: () Calling .GetVersion
I0625 15:55:11.298348   35841 main.go:141] libmachine: Using API Version  1
I0625 15:55:11.298367   35841 main.go:141] libmachine: () Calling .SetConfigRaw
I0625 15:55:11.298773   35841 main.go:141] libmachine: () Calling .GetMachineName
I0625 15:55:11.298986   35841 main.go:141] libmachine: (functional-951282) Calling .DriverName
I0625 15:55:11.299179   35841 ssh_runner.go:195] Run: systemctl --version
I0625 15:55:11.299212   35841 main.go:141] libmachine: (functional-951282) Calling .GetSSHHostname
I0625 15:55:11.302042   35841 main.go:141] libmachine: (functional-951282) DBG | domain functional-951282 has defined MAC address 52:54:00:fb:d4:f4 in network mk-functional-951282
I0625 15:55:11.302508   35841 main.go:141] libmachine: (functional-951282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:d4:f4", ip: ""} in network mk-functional-951282: {Iface:virbr1 ExpiryTime:2024-06-25 16:51:43 +0000 UTC Type:0 Mac:52:54:00:fb:d4:f4 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:functional-951282 Clientid:01:52:54:00:fb:d4:f4}
I0625 15:55:11.302537   35841 main.go:141] libmachine: (functional-951282) DBG | domain functional-951282 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:d4:f4 in network mk-functional-951282
I0625 15:55:11.302763   35841 main.go:141] libmachine: (functional-951282) Calling .GetSSHPort
I0625 15:55:11.302909   35841 main.go:141] libmachine: (functional-951282) Calling .GetSSHKeyPath
I0625 15:55:11.303036   35841 main.go:141] libmachine: (functional-951282) Calling .GetSSHUsername
I0625 15:55:11.303180   35841 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/functional-951282/id_rsa Username:docker}
I0625 15:55:11.381436   35841 ssh_runner.go:195] Run: sudo crictl images --output json
I0625 15:55:11.435495   35841 main.go:141] libmachine: Making call to close driver server
I0625 15:55:11.435509   35841 main.go:141] libmachine: (functional-951282) Calling .Close
I0625 15:55:11.435817   35841 main.go:141] libmachine: Successfully made call to close driver server
I0625 15:55:11.435829   35841 main.go:141] libmachine: (functional-951282) DBG | Closing plugin on server side
I0625 15:55:11.435842   35841 main.go:141] libmachine: Making call to close connection to plugin binary
I0625 15:55:11.435852   35841 main.go:141] libmachine: Making call to close driver server
I0625 15:55:11.435860   35841 main.go:141] libmachine: (functional-951282) Calling .Close
I0625 15:55:11.436074   35841 main.go:141] libmachine: Successfully made call to close driver server
I0625 15:55:11.436087   35841 main.go:141] libmachine: Making call to close connection to plugin binary
I0625 15:55:11.436110   35841 main.go:141] libmachine: (functional-951282) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
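
For reference, the JSON printed by "image ls --format json" above is a flat array of objects with the fields id, repoDigests, repoTags and size (size is serialized as a string of bytes). The following is a minimal, illustrative Go sketch for consuming that output outside the test suite; the images.json path and the struct name are assumptions, not part of minikube.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// listedImage mirrors the fields visible in the stdout above.
type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // byte count, serialized as a string
}

func main() {
	// Captured beforehand, e.g.:
	//   out/minikube-linux-amd64 -p functional-951282 image ls --format json > images.json
	data, err := os.ReadFile("images.json")
	if err != nil {
		panic(err)
	}
	var images []listedImage
	if err := json.Unmarshal(data, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		// Truncate the full image ID to the usual 12-character short form.
		fmt.Printf("%.12s  %s  %s bytes\n", img.ID, tag, img.Size)
	}
}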

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-951282 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e0c9858e10ed8be697dc2809db78c57357ffc82de88c69a3dee5d148354679ef
repoDigests:
- docker.io/library/nginx@sha256:4e02e85a6f060a8406978fa53aafd2d828d0cedf5259275d191bab9afc33249e
- docker.io/library/nginx@sha256:9c367186df9a6b18c6735357b8eb7f407347e84aea09beb184961cb83543d46e
repoTags:
- docker.io/library/nginx:latest
size: "191815842"
- id: ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f
repoDigests:
- docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266
- docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8
repoTags:
- docker.io/kindest/kindnetd:v20240513-cd2ac642
size: "65908273"
- id: 90fe88dd585b5322a57ec2bec33b97255d882f0a7243da979676156120ecc167
repoDigests:
- localhost/minikube-local-cache-test@sha256:0bd512cf5a81f918ce4eb5782e1b85b2d8a772696ddad26c6329d033904880a6
repoTags:
- localhost/minikube-local-cache-test:functional-951282
size: "3330"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e
- registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.2
size: "112194888"
- id: 7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc
- registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.2
size: "63051080"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772
repoDigests:
- registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961
- registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec
repoTags:
- registry.k8s.io/kube-proxy:v1.30.2
size: "85953433"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-951282
size: "34114467"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816
- registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.2
size: "117609954"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-951282 image ls --format yaml --alsologtostderr:
I0625 15:55:11.043249   35795 out.go:291] Setting OutFile to fd 1 ...
I0625 15:55:11.043472   35795 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0625 15:55:11.043480   35795 out.go:304] Setting ErrFile to fd 2...
I0625 15:55:11.043484   35795 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0625 15:55:11.043670   35795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
I0625 15:55:11.044176   35795 config.go:182] Loaded profile config "functional-951282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0625 15:55:11.044263   35795 config.go:182] Loaded profile config "functional-951282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0625 15:55:11.044634   35795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0625 15:55:11.044685   35795 main.go:141] libmachine: Launching plugin server for driver kvm2
I0625 15:55:11.058411   35795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33823
I0625 15:55:11.059488   35795 main.go:141] libmachine: () Calling .GetVersion
I0625 15:55:11.060178   35795 main.go:141] libmachine: Using API Version  1
I0625 15:55:11.060207   35795 main.go:141] libmachine: () Calling .SetConfigRaw
I0625 15:55:11.060567   35795 main.go:141] libmachine: () Calling .GetMachineName
I0625 15:55:11.060756   35795 main.go:141] libmachine: (functional-951282) Calling .GetState
I0625 15:55:11.062966   35795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0625 15:55:11.063027   35795 main.go:141] libmachine: Launching plugin server for driver kvm2
I0625 15:55:11.076581   35795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34157
I0625 15:55:11.077031   35795 main.go:141] libmachine: () Calling .GetVersion
I0625 15:55:11.077480   35795 main.go:141] libmachine: Using API Version  1
I0625 15:55:11.077501   35795 main.go:141] libmachine: () Calling .SetConfigRaw
I0625 15:55:11.077808   35795 main.go:141] libmachine: () Calling .GetMachineName
I0625 15:55:11.077999   35795 main.go:141] libmachine: (functional-951282) Calling .DriverName
I0625 15:55:11.078194   35795 ssh_runner.go:195] Run: systemctl --version
I0625 15:55:11.078218   35795 main.go:141] libmachine: (functional-951282) Calling .GetSSHHostname
I0625 15:55:11.081430   35795 main.go:141] libmachine: (functional-951282) DBG | domain functional-951282 has defined MAC address 52:54:00:fb:d4:f4 in network mk-functional-951282
I0625 15:55:11.081901   35795 main.go:141] libmachine: (functional-951282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:d4:f4", ip: ""} in network mk-functional-951282: {Iface:virbr1 ExpiryTime:2024-06-25 16:51:43 +0000 UTC Type:0 Mac:52:54:00:fb:d4:f4 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:functional-951282 Clientid:01:52:54:00:fb:d4:f4}
I0625 15:55:11.081933   35795 main.go:141] libmachine: (functional-951282) DBG | domain functional-951282 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:d4:f4 in network mk-functional-951282
I0625 15:55:11.082097   35795 main.go:141] libmachine: (functional-951282) Calling .GetSSHPort
I0625 15:55:11.082254   35795 main.go:141] libmachine: (functional-951282) Calling .GetSSHKeyPath
I0625 15:55:11.082388   35795 main.go:141] libmachine: (functional-951282) Calling .GetSSHUsername
I0625 15:55:11.082501   35795 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/functional-951282/id_rsa Username:docker}
I0625 15:55:11.165533   35795 ssh_runner.go:195] Run: sudo crictl images --output json
I0625 15:55:11.213946   35795 main.go:141] libmachine: Making call to close driver server
I0625 15:55:11.213959   35795 main.go:141] libmachine: (functional-951282) Calling .Close
I0625 15:55:11.214324   35795 main.go:141] libmachine: (functional-951282) DBG | Closing plugin on server side
I0625 15:55:11.214332   35795 main.go:141] libmachine: Successfully made call to close driver server
I0625 15:55:11.214358   35795 main.go:141] libmachine: Making call to close connection to plugin binary
I0625 15:55:11.214375   35795 main.go:141] libmachine: Making call to close driver server
I0625 15:55:11.214389   35795 main.go:141] libmachine: (functional-951282) Calling .Close
I0625 15:55:11.214618   35795 main.go:141] libmachine: (functional-951282) DBG | Closing plugin on server side
I0625 15:55:11.214664   35795 main.go:141] libmachine: Successfully made call to close driver server
I0625 15:55:11.214679   35795 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.80s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-951282 ssh pgrep buildkitd: exit status 1 (183.793054ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 image build -t localhost/my-image:functional-951282 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-951282 image build -t localhost/my-image:functional-951282 testdata/build --alsologtostderr: (3.408270554s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-951282 image build -t localhost/my-image:functional-951282 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 0e94ef04d0c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-951282
--> a77a47dac1c
Successfully tagged localhost/my-image:functional-951282
a77a47dac1c867265b4bcd68c7127036a0ae1e79677d5e6db5a42b5232685899
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-951282 image build -t localhost/my-image:functional-951282 testdata/build --alsologtostderr:
I0625 15:55:11.664808   35917 out.go:291] Setting OutFile to fd 1 ...
I0625 15:55:11.665067   35917 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0625 15:55:11.665076   35917 out.go:304] Setting ErrFile to fd 2...
I0625 15:55:11.665080   35917 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0625 15:55:11.665267   35917 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
I0625 15:55:11.665802   35917 config.go:182] Loaded profile config "functional-951282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0625 15:55:11.666384   35917 config.go:182] Loaded profile config "functional-951282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0625 15:55:11.666798   35917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0625 15:55:11.666851   35917 main.go:141] libmachine: Launching plugin server for driver kvm2
I0625 15:55:11.681164   35917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36291
I0625 15:55:11.681534   35917 main.go:141] libmachine: () Calling .GetVersion
I0625 15:55:11.682005   35917 main.go:141] libmachine: Using API Version  1
I0625 15:55:11.682027   35917 main.go:141] libmachine: () Calling .SetConfigRaw
I0625 15:55:11.682331   35917 main.go:141] libmachine: () Calling .GetMachineName
I0625 15:55:11.682550   35917 main.go:141] libmachine: (functional-951282) Calling .GetState
I0625 15:55:11.684265   35917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0625 15:55:11.684300   35917 main.go:141] libmachine: Launching plugin server for driver kvm2
I0625 15:55:11.699252   35917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43455
I0625 15:55:11.699650   35917 main.go:141] libmachine: () Calling .GetVersion
I0625 15:55:11.700084   35917 main.go:141] libmachine: Using API Version  1
I0625 15:55:11.700107   35917 main.go:141] libmachine: () Calling .SetConfigRaw
I0625 15:55:11.700428   35917 main.go:141] libmachine: () Calling .GetMachineName
I0625 15:55:11.700628   35917 main.go:141] libmachine: (functional-951282) Calling .DriverName
I0625 15:55:11.700840   35917 ssh_runner.go:195] Run: systemctl --version
I0625 15:55:11.700864   35917 main.go:141] libmachine: (functional-951282) Calling .GetSSHHostname
I0625 15:55:11.703940   35917 main.go:141] libmachine: (functional-951282) DBG | domain functional-951282 has defined MAC address 52:54:00:fb:d4:f4 in network mk-functional-951282
I0625 15:55:11.704325   35917 main.go:141] libmachine: (functional-951282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:d4:f4", ip: ""} in network mk-functional-951282: {Iface:virbr1 ExpiryTime:2024-06-25 16:51:43 +0000 UTC Type:0 Mac:52:54:00:fb:d4:f4 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:functional-951282 Clientid:01:52:54:00:fb:d4:f4}
I0625 15:55:11.704356   35917 main.go:141] libmachine: (functional-951282) DBG | domain functional-951282 has defined IP address 192.168.39.55 and MAC address 52:54:00:fb:d4:f4 in network mk-functional-951282
I0625 15:55:11.704543   35917 main.go:141] libmachine: (functional-951282) Calling .GetSSHPort
I0625 15:55:11.704709   35917 main.go:141] libmachine: (functional-951282) Calling .GetSSHKeyPath
I0625 15:55:11.704853   35917 main.go:141] libmachine: (functional-951282) Calling .GetSSHUsername
I0625 15:55:11.705005   35917 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/functional-951282/id_rsa Username:docker}
I0625 15:55:11.781073   35917 build_images.go:161] Building image from path: /tmp/build.2610224039.tar
I0625 15:55:11.781135   35917 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0625 15:55:11.792054   35917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2610224039.tar
I0625 15:55:11.796316   35917 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2610224039.tar: stat -c "%s %y" /var/lib/minikube/build/build.2610224039.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2610224039.tar': No such file or directory
I0625 15:55:11.796337   35917 ssh_runner.go:362] scp /tmp/build.2610224039.tar --> /var/lib/minikube/build/build.2610224039.tar (3072 bytes)
I0625 15:55:11.820777   35917 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2610224039
I0625 15:55:11.830925   35917 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2610224039 -xf /var/lib/minikube/build/build.2610224039.tar
I0625 15:55:11.841499   35917 crio.go:315] Building image: /var/lib/minikube/build/build.2610224039
I0625 15:55:11.841563   35917 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-951282 /var/lib/minikube/build/build.2610224039 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0625 15:55:15.007060   35917 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-951282 /var/lib/minikube/build/build.2610224039 --cgroup-manager=cgroupfs: (3.165464103s)
I0625 15:55:15.007148   35917 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2610224039
I0625 15:55:15.017646   35917 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2610224039.tar
I0625 15:55:15.027377   35917 build_images.go:217] Built localhost/my-image:functional-951282 from /tmp/build.2610224039.tar
I0625 15:55:15.027405   35917 build_images.go:133] succeeded building to: functional-951282
I0625 15:55:15.027412   35917 build_images.go:134] failed building to: 
I0625 15:55:15.027434   35917 main.go:141] libmachine: Making call to close driver server
I0625 15:55:15.027449   35917 main.go:141] libmachine: (functional-951282) Calling .Close
I0625 15:55:15.027674   35917 main.go:141] libmachine: (functional-951282) DBG | Closing plugin on server side
I0625 15:55:15.027702   35917 main.go:141] libmachine: Successfully made call to close driver server
I0625 15:55:15.027726   35917 main.go:141] libmachine: Making call to close connection to plugin binary
I0625 15:55:15.027744   35917 main.go:141] libmachine: Making call to close driver server
I0625 15:55:15.027757   35917 main.go:141] libmachine: (functional-951282) Calling .Close
I0625 15:55:15.028014   35917 main.go:141] libmachine: Successfully made call to close driver server
I0625 15:55:15.028092   35917 main.go:141] libmachine: (functional-951282) DBG | Closing plugin on server side
I0625 15:55:15.028116   35917 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.80s)
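
The three STEP lines above come from the build context under testdata/build, and on the crio runtime the build is delegated to podman inside the VM, as the "sudo podman build ... --cgroup-manager=cgroupfs" line shows. As a rough, self-contained sketch (not the test's actual testdata), an equivalent build can be reproduced by writing a matching Dockerfile plus a placeholder content.txt and handing the directory to "minikube image build"; the file contents and the binary path below are assumptions for illustration.

package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "build-ctx")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	// Mirrors the three STEPs in the log: FROM busybox, RUN true, ADD content.txt /.
	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	// Placeholder payload; the real testdata file's contents are not shown in the log.
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		panic(err)
	}

	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-951282",
		"image", "build", "-t", "localhost/my-image:functional-951282", dir)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}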

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.786439406s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-951282
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.81s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-951282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup122366650/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-951282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup122366650/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-951282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup122366650/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-951282 ssh "findmnt -T" /mount1: exit status 1 (347.71691ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-951282 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-951282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup122366650/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-951282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup122366650/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-951282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup122366650/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.58s)
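
The cleanup path exercised here is the notable part: several mount daemons can share one host directory, and a single "mount --kill=true" for the profile tears all of them down. Below is a minimal sketch of that flow, assuming the same binary location and profile name as the log; the host directory is illustrative rather than the test's per-run temp dir.

package main

import (
	"os"
	"os/exec"
)

func main() {
	hostDir := "/tmp/mount-demo" // illustrative; the test uses a per-test temp directory
	if err := os.MkdirAll(hostDir, 0o755); err != nil {
		panic(err)
	}

	// Start three background mount daemons for the same host directory,
	// mirroring the three "daemon:" lines in the log.
	var daemons []*exec.Cmd
	for _, target := range []string{"/mount1", "/mount2", "/mount3"} {
		cmd := exec.Command("out/minikube-linux-amd64", "mount", "-p", "functional-951282",
			hostDir+":"+target, "--alsologtostderr", "-v=1")
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		daemons = append(daemons, cmd)
	}

	// Verification as in the log: findmnt -T <target> inside the guest over ssh.
	_ = exec.Command("out/minikube-linux-amd64", "-p", "functional-951282",
		"ssh", "findmnt -T /mount1").Run()

	// One --kill=true invocation terminates every mount daemon for the profile.
	if err := exec.Command("out/minikube-linux-amd64", "mount", "-p", "functional-951282",
		"--kill=true").Run(); err != nil {
		panic(err)
	}
	for _, d := range daemons {
		_ = d.Wait()
	}
}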

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 image load --daemon gcr.io/google-containers/addon-resizer:functional-951282 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-951282 image load --daemon gcr.io/google-containers/addon-resizer:functional-951282 --alsologtostderr: (3.888178937s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.15s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 image load --daemon gcr.io/google-containers/addon-resizer:functional-951282 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-951282 image load --daemon gcr.io/google-containers/addon-resizer:functional-951282 --alsologtostderr: (3.916804926s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.108513729s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-951282
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 image load --daemon gcr.io/google-containers/addon-resizer:functional-951282 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-951282 image load --daemon gcr.io/google-containers/addon-resizer:functional-951282 --alsologtostderr: (3.66594616s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.29s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 image save gcr.io/google-containers/addon-resizer:functional-951282 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-951282 image save gcr.io/google-containers/addon-resizer:functional-951282 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.487302997s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.49s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (6.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-951282 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (6.037977172s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (6.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-951282
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-951282 image save --daemon gcr.io/google-containers/addon-resizer:functional-951282 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-951282 image save --daemon gcr.io/google-containers/addon-resizer:functional-951282 --alsologtostderr: (1.028340258s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-951282
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.06s)
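
Taken together, the last three image tests form a round trip: save a tagged image to a tarball, load it back into the cluster from that file, then export it into the local Docker daemon. A condensed sketch of that sequence, using the same commands and tarball path as the log, with minimal error handling:

package main

import (
	"os"
	"os/exec"
)

// run shells out to the minikube binary used by this test run and streams its output.
func run(args ...string) {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func main() {
	const profile = "functional-951282"
	const image = "gcr.io/google-containers/addon-resizer:" + profile
	const tarball = "/home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar"

	// ImageSaveToFile: export the in-cluster image to a tarball on the host.
	run("-p", profile, "image", "save", image, tarball, "--alsologtostderr")

	// ImageLoadFromFile: load the tarball back into the cluster's container runtime.
	run("-p", profile, "image", "load", tarball, "--alsologtostderr")

	// ImageSaveDaemon: push the image from the cluster into the local Docker daemon.
	run("-p", profile, "image", "save", "--daemon", image, "--alsologtostderr")

	// Sanity check, as in the test: the tag should now be listed.
	run("-p", profile, "image", "ls")
}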

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-951282
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-951282
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-951282
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (195.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-674765 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-674765 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m14.866612365s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (195.54s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-674765 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-674765 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-674765 -- rollout status deployment/busybox: (4.338690897s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-674765 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-674765 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-674765 -- exec busybox-fc5497c4f-jx6j4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-674765 -- exec busybox-fc5497c4f-qjw4r -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-674765 -- exec busybox-fc5497c4f-vn65x -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-674765 -- exec busybox-fc5497c4f-jx6j4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-674765 -- exec busybox-fc5497c4f-qjw4r -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-674765 -- exec busybox-fc5497c4f-vn65x -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-674765 -- exec busybox-fc5497c4f-jx6j4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-674765 -- exec busybox-fc5497c4f-qjw4r -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-674765 -- exec busybox-fc5497c4f-vn65x -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.42s)
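
The DeployApp sequence boils down to: apply the busybox test deployment, wait for the rollout, then confirm in-cluster DNS from every pod. A minimal sketch of the DNS-verification half is shown below; it assumes the deployment from testdata/ha/ha-pod-dns-test.yaml has already been applied and rolled out, and it uses "kubectl --context ha-674765" directly rather than going through "minikube kubectl" as the test does.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const ctx = "ha-674765"
	kubectl := func(args ...string) *exec.Cmd {
		return exec.Command("kubectl", append([]string{"--context", ctx}, args...)...)
	}

	// Pod names, fetched the same way the test does.
	out, err := kubectl("get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		panic(err)
	}

	// The three names resolved from each pod in the log above.
	hosts := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		for _, host := range hosts {
			if err := kubectl("exec", pod, "--", "nslookup", host).Run(); err != nil {
				fmt.Printf("%s: nslookup %s failed: %v\n", pod, host, err)
			} else {
				fmt.Printf("%s: nslookup %s ok\n", pod, host)
			}
		}
	}
}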

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.20s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-674765 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-674765 -- exec busybox-fc5497c4f-jx6j4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-674765 -- exec busybox-fc5497c4f-jx6j4 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-674765 -- exec busybox-fc5497c4f-qjw4r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-674765 -- exec busybox-fc5497c4f-qjw4r -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-674765 -- exec busybox-fc5497c4f-vn65x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-674765 -- exec busybox-fc5497c4f-vn65x -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.20s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (47.60s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-674765 -v=7 --alsologtostderr
E0625 15:59:29.127453   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
E0625 15:59:29.133067   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
E0625 15:59:29.143291   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
E0625 15:59:29.164337   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
E0625 15:59:29.204566   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
E0625 15:59:29.284848   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
E0625 15:59:29.445736   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
E0625 15:59:29.766736   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
E0625 15:59:30.407696   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
E0625 15:59:31.688791   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
E0625 15:59:34.249524   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-674765 -v=7 --alsologtostderr: (46.782759996s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (47.60s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-674765 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 cp testdata/cp-test.txt ha-674765:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 cp ha-674765:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2213486447/001/cp-test_ha-674765.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 cp ha-674765:/home/docker/cp-test.txt ha-674765-m02:/home/docker/cp-test_ha-674765_ha-674765-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765-m02 "sudo cat /home/docker/cp-test_ha-674765_ha-674765-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 cp ha-674765:/home/docker/cp-test.txt ha-674765-m03:/home/docker/cp-test_ha-674765_ha-674765-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765-m03 "sudo cat /home/docker/cp-test_ha-674765_ha-674765-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 cp ha-674765:/home/docker/cp-test.txt ha-674765-m04:/home/docker/cp-test_ha-674765_ha-674765-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765 "sudo cat /home/docker/cp-test.txt"
E0625 15:59:39.370681   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765-m04 "sudo cat /home/docker/cp-test_ha-674765_ha-674765-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 cp testdata/cp-test.txt ha-674765-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 cp ha-674765-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2213486447/001/cp-test_ha-674765-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 cp ha-674765-m02:/home/docker/cp-test.txt ha-674765:/home/docker/cp-test_ha-674765-m02_ha-674765.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765 "sudo cat /home/docker/cp-test_ha-674765-m02_ha-674765.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 cp ha-674765-m02:/home/docker/cp-test.txt ha-674765-m03:/home/docker/cp-test_ha-674765-m02_ha-674765-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765-m03 "sudo cat /home/docker/cp-test_ha-674765-m02_ha-674765-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 cp ha-674765-m02:/home/docker/cp-test.txt ha-674765-m04:/home/docker/cp-test_ha-674765-m02_ha-674765-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765-m04 "sudo cat /home/docker/cp-test_ha-674765-m02_ha-674765-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 cp testdata/cp-test.txt ha-674765-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 cp ha-674765-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2213486447/001/cp-test_ha-674765-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 cp ha-674765-m03:/home/docker/cp-test.txt ha-674765:/home/docker/cp-test_ha-674765-m03_ha-674765.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765 "sudo cat /home/docker/cp-test_ha-674765-m03_ha-674765.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 cp ha-674765-m03:/home/docker/cp-test.txt ha-674765-m02:/home/docker/cp-test_ha-674765-m03_ha-674765-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765-m02 "sudo cat /home/docker/cp-test_ha-674765-m03_ha-674765-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 cp ha-674765-m03:/home/docker/cp-test.txt ha-674765-m04:/home/docker/cp-test_ha-674765-m03_ha-674765-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765-m04 "sudo cat /home/docker/cp-test_ha-674765-m03_ha-674765-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 cp testdata/cp-test.txt ha-674765-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 cp ha-674765-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2213486447/001/cp-test_ha-674765-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 cp ha-674765-m04:/home/docker/cp-test.txt ha-674765:/home/docker/cp-test_ha-674765-m04_ha-674765.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765 "sudo cat /home/docker/cp-test_ha-674765-m04_ha-674765.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 cp ha-674765-m04:/home/docker/cp-test.txt ha-674765-m02:/home/docker/cp-test_ha-674765-m04_ha-674765-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765-m02 "sudo cat /home/docker/cp-test_ha-674765-m04_ha-674765-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 cp ha-674765-m04:/home/docker/cp-test.txt ha-674765-m03:/home/docker/cp-test_ha-674765-m04_ha-674765-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 ssh -n ha-674765-m03 "sudo cat /home/docker/cp-test_ha-674765-m04_ha-674765-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.58s)
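
The CopyFile steps above all follow the same round trip: "minikube cp" a local file onto a node, then "minikube ssh -n <node>" it back out and compare. A minimal sketch of that pattern in Go, assuming minikube is on PATH and reusing the profile and node names from this run (substitute your own):

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	const profile, node = "ha-674765", "ha-674765-m02" // names from this run; any running profile/node works
	src := "testdata/cp-test.txt"                      // any local file
	dst := node + ":/home/docker/cp-test.txt"

	// Copy the file onto the node.
	if out, err := exec.Command("minikube", "-p", profile, "cp", src, dst).CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}

	// Read it back over SSH and compare with the local copy.
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatalf("ssh cat failed: %v", err)
	}
	want, err := os.ReadFile(src)
	if err != nil {
		log.Fatal(err)
	}
	if !bytes.Equal(got, want) {
		log.Fatalf("round-trip mismatch: want %q, got %q", want, got)
	}
	fmt.Println("cp round trip OK")
}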

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.45s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0625 16:02:12.974356   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.450317799s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.45s)
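
The Degraded/HAppy checks above shell out to "minikube profile list --output json" and inspect the reported cluster status. A rough sketch of reading that JSON from Go, decoding generically because the exact schema is not shown in this log; the "valid", "Name" and "Status" keys below are assumptions, not taken from the output above:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
	if err != nil {
		log.Fatalf("profile list: %v", err)
	}
	var doc map[string]any
	if err := json.Unmarshal(out, &doc); err != nil {
		log.Fatal(err)
	}
	// Assumed keys: a "valid" array of profiles, each carrying "Name" and "Status".
	valid, _ := doc["valid"].([]any)
	for _, p := range valid {
		if prof, ok := p.(map[string]any); ok {
			fmt.Printf("%v: %v\n", prof["Name"], prof["Status"])
		}
	}
}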

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

TestMultiControlPlane/serial/DeleteSecondaryNode (17.22s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-674765 node delete m03 -v=7 --alsologtostderr: (16.513212644s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.22s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

TestMultiControlPlane/serial/RestartCluster (358.82s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-674765 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0625 16:14:29.127283   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
E0625 16:15:52.176061   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-674765 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m58.006700394s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (358.82s)
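
The "kubectl get nodes -o go-template" call above walks .items[].status.conditions and prints each node's Ready condition, which is how the test confirms every node came back after the restart. The same check done by decoding the JSON form instead, a sketch with the struct trimmed to only the fields the template touches:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Only the Node fields needed to evaluate the Ready condition.
type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var nodes nodeList
	if err := json.Unmarshal(out, &nodes); err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" {
				fmt.Printf("%s Ready=%s\n", n.Metadata.Name, c.Status)
			}
		}
	}
}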

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.4s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.40s)

TestMultiControlPlane/serial/AddSecondaryNode (71.5s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-674765 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-674765 --control-plane -v=7 --alsologtostderr: (1m10.692939604s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-674765 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (71.50s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

TestJSONOutput/start/Command (56.41s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-222321 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0625 16:19:29.128163   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-222321 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (56.409699075s)
--- PASS: TestJSONOutput/start/Command (56.41s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
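
Judging by their names, the DistinctCurrentSteps and IncreasingCurrentSteps subtests assert properties of the "currentstep" field carried by the step events that --output=json emits (the event envelope is visible under TestErrorJSONOutput further down). A rough sketch of such a check, assuming the event stream was saved to a file of newline-delimited JSON passed as the first argument; the real assertions live in json_output_test.go:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
	"strconv"
)

// Only the envelope fields used here; step events carry string-valued data.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	f, err := os.Open(os.Args[1]) // file containing the saved --output=json stream
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	last := -1
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // non-JSON or differently shaped lines are ignored
		}
		step, ok := ev.Data["currentstep"]
		if ev.Type != "io.k8s.sigs.minikube.step" || !ok {
			continue
		}
		n, err := strconv.Atoi(step)
		if err != nil {
			log.Fatalf("bad currentstep %q: %v", step, err)
		}
		if n <= last {
			log.Fatalf("currentstep %d does not increase (previous %d)", n, last)
		}
		last = n
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("currentstep values are distinct and strictly increasing")
}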

TestJSONOutput/pause/Command (0.68s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-222321 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-222321 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.33s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-222321 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-222321 --output=json --user=testUser: (7.330726891s)
--- PASS: TestJSONOutput/stop/Command (7.33s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.18s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-456155 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-456155 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (56.988787ms)
-- stdout --
	{"specversion":"1.0","id":"4573027f-e473-44ba-a9e9-b0788e579a03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-456155] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"023c0a3b-4690-41db-8044-b32d01bb007e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19128"}}
	{"specversion":"1.0","id":"304b6ebc-dfd9-40ea-8748-1802ae3c95f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d591502b-fccc-4c40-915e-7ea49ecb63c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19128-13846/kubeconfig"}}
	{"specversion":"1.0","id":"3eb4e90f-693c-48e3-9ac6-2262d4ae8ae8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19128-13846/.minikube"}}
	{"specversion":"1.0","id":"9dec1ab6-e10b-486e-a2c4-0e6d2fd0dfc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"241f872c-95a5-4c68-a318-45c6ce9a7d98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"20e65312-28a6-489c-a639-d3a6a29484e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-456155" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-456155
--- PASS: TestErrorJSONOutput (0.18s)
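
Each line in the stdout block above is a CloudEvents-style envelope, and the error event at the end is what corresponds to exit status 56. A small sketch that decodes that exact line (copied verbatim from the output) with a struct covering only the fields shown:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Envelope of the events printed above; every field appears in the log lines.
type cloudEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// The error event emitted above, verbatim.
	line := `{"specversion":"1.0","id":"20e65312-28a6-489c-a639-d3a6a29484e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: exitcode=%s message=%q\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
	// Output: DRV_UNSUPPORTED_OS: exitcode=56 message="The driver 'fail' is not supported on linux/amd64"
}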

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (84.64s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-839950 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-839950 --driver=kvm2  --container-runtime=crio: (41.049247846s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-842088 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-842088 --driver=kvm2  --container-runtime=crio: (41.199928875s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-839950
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-842088
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-842088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-842088
helpers_test.go:175: Cleaning up "first-839950" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-839950
--- PASS: TestMinikubeProfile (84.64s)

TestMountStart/serial/StartWithMountFirst (23.89s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-880984 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-880984 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (22.892838825s)
--- PASS: TestMountStart/serial/StartWithMountFirst (23.89s)

TestMountStart/serial/VerifyMountFirst (0.34s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-880984 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-880984 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.34s)
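
VerifyMountFirst checks the 9p mount two ways: listing /minikube-host and grepping the guest's mount table for 9p. A minimal sketch of the same check from Go, reusing the profile name started above (substitute your own):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const profile = "mount-start-1-880984" // profile from this run

	// Dump the guest mount table over SSH and look for a 9p entry.
	out, err := exec.Command("minikube", "-p", profile, "ssh", "--", "mount").Output()
	if err != nil {
		log.Fatalf("minikube ssh mount: %v", err)
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, "9p") {
			fmt.Println("9p mount present:", line)
			return
		}
	}
	log.Fatal("no 9p mount found on the guest")
}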

TestMountStart/serial/StartWithMountSecond (24.27s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-897595 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-897595 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.267458491s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.27s)

TestMountStart/serial/VerifyMountSecond (0.36s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-897595 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-897595 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

TestMountStart/serial/DeleteFirst (0.9s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-880984 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.90s)

TestMountStart/serial/VerifyMountPostDelete (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-897595 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-897595 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

TestMountStart/serial/Stop (1.31s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-897595
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-897595: (1.313530278s)
--- PASS: TestMountStart/serial/Stop (1.31s)

TestMountStart/serial/RestartStopped (20.91s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-897595
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-897595: (19.909309733s)
--- PASS: TestMountStart/serial/RestartStopped (20.91s)

TestMountStart/serial/VerifyMountPostStop (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-897595 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-897595 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

TestMultiNode/serial/FreshStart2Nodes (101.96s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-552402 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-552402 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m41.572205867s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 status --alsologtostderr
E0625 16:24:29.127574   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
--- PASS: TestMultiNode/serial/FreshStart2Nodes (101.96s)

TestMultiNode/serial/DeployApp2Nodes (5.49s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-552402 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-552402 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-552402 -- rollout status deployment/busybox: (4.079165683s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-552402 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-552402 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-552402 -- exec busybox-fc5497c4f-924w9 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-552402 -- exec busybox-fc5497c4f-97579 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-552402 -- exec busybox-fc5497c4f-924w9 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-552402 -- exec busybox-fc5497c4f-97579 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-552402 -- exec busybox-fc5497c4f-924w9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-552402 -- exec busybox-fc5497c4f-97579 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.49s)
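
DeployApp2Nodes lists the busybox pods with a jsonpath over .items[*].metadata.name and then runs nslookup inside each one. A small sketch of that loop, assuming the current kubectl context already points at the multinode cluster and the default namespace holds only the test pods:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same jsonpath the test uses to collect pod names.
	out, err := exec.Command("kubectl", "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		log.Fatalf("get pods: %v", err)
	}
	for _, pod := range strings.Fields(string(out)) {
		// DNS check inside the pod, as in the test.
		res, err := exec.Command("kubectl", "exec", pod, "--",
			"nslookup", "kubernetes.default").CombinedOutput()
		if err != nil {
			log.Fatalf("nslookup in %s failed: %v\n%s", pod, err, res)
		}
		fmt.Printf("%s resolved kubernetes.default\n", pod)
	}
}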

TestMultiNode/serial/PingHostFrom2Pods (0.76s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-552402 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-552402 -- exec busybox-fc5497c4f-924w9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-552402 -- exec busybox-fc5497c4f-924w9 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-552402 -- exec busybox-fc5497c4f-97579 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-552402 -- exec busybox-fc5497c4f-97579 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)
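
PingHostFrom2Pods resolves host.minikube.internal from inside each pod (the awk NR==5 / cut -f3 pipeline pulls the resolved address out of the busybox nslookup output) and then pings that address once. The same two steps as a sketch, using one pod name from this run; list the pods and substitute your own:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-fc5497c4f-924w9" // pod name from this run

	// Same pipeline the test uses to extract the host (gateway) address.
	out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c",
		"nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3").Output()
	if err != nil {
		log.Fatalf("nslookup: %v", err)
	}
	hostIP := strings.TrimSpace(string(out))
	fmt.Println("host.minikube.internal ->", hostIP)

	// One ICMP probe back to the host, as in the test.
	if res, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c",
		"ping -c 1 "+hostIP).CombinedOutput(); err != nil {
		log.Fatalf("ping failed: %v\n%s", err, res)
	}
	fmt.Println("host is reachable from the pod")
}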

TestMultiNode/serial/AddNode (37.37s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-552402 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-552402 -v 3 --alsologtostderr: (36.825774396s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (37.37s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-552402 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.22s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

TestMultiNode/serial/CopyFile (6.93s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 cp testdata/cp-test.txt multinode-552402:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 ssh -n multinode-552402 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 cp multinode-552402:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1179120027/001/cp-test_multinode-552402.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 ssh -n multinode-552402 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 cp multinode-552402:/home/docker/cp-test.txt multinode-552402-m02:/home/docker/cp-test_multinode-552402_multinode-552402-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 ssh -n multinode-552402 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 ssh -n multinode-552402-m02 "sudo cat /home/docker/cp-test_multinode-552402_multinode-552402-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 cp multinode-552402:/home/docker/cp-test.txt multinode-552402-m03:/home/docker/cp-test_multinode-552402_multinode-552402-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 ssh -n multinode-552402 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 ssh -n multinode-552402-m03 "sudo cat /home/docker/cp-test_multinode-552402_multinode-552402-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 cp testdata/cp-test.txt multinode-552402-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 ssh -n multinode-552402-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 cp multinode-552402-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1179120027/001/cp-test_multinode-552402-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 ssh -n multinode-552402-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 cp multinode-552402-m02:/home/docker/cp-test.txt multinode-552402:/home/docker/cp-test_multinode-552402-m02_multinode-552402.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 ssh -n multinode-552402-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 ssh -n multinode-552402 "sudo cat /home/docker/cp-test_multinode-552402-m02_multinode-552402.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 cp multinode-552402-m02:/home/docker/cp-test.txt multinode-552402-m03:/home/docker/cp-test_multinode-552402-m02_multinode-552402-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 ssh -n multinode-552402-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 ssh -n multinode-552402-m03 "sudo cat /home/docker/cp-test_multinode-552402-m02_multinode-552402-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 cp testdata/cp-test.txt multinode-552402-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 ssh -n multinode-552402-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 cp multinode-552402-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1179120027/001/cp-test_multinode-552402-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 ssh -n multinode-552402-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 cp multinode-552402-m03:/home/docker/cp-test.txt multinode-552402:/home/docker/cp-test_multinode-552402-m03_multinode-552402.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 ssh -n multinode-552402-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 ssh -n multinode-552402 "sudo cat /home/docker/cp-test_multinode-552402-m03_multinode-552402.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 cp multinode-552402-m03:/home/docker/cp-test.txt multinode-552402-m02:/home/docker/cp-test_multinode-552402-m03_multinode-552402-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 ssh -n multinode-552402-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 ssh -n multinode-552402-m02 "sudo cat /home/docker/cp-test_multinode-552402-m03_multinode-552402-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.93s)

TestMultiNode/serial/StopNode (2.33s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-552402 node stop m03: (1.510263863s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-552402 status: exit status 7 (409.674575ms)
-- stdout --
	multinode-552402
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-552402-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-552402-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-552402 status --alsologtostderr: exit status 7 (414.276149ms)
-- stdout --
	multinode-552402
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-552402-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-552402-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0625 16:25:22.004857   53364 out.go:291] Setting OutFile to fd 1 ...
	I0625 16:25:22.005088   53364 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:25:22.005096   53364 out.go:304] Setting ErrFile to fd 2...
	I0625 16:25:22.005100   53364 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0625 16:25:22.005245   53364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19128-13846/.minikube/bin
	I0625 16:25:22.005389   53364 out.go:298] Setting JSON to false
	I0625 16:25:22.005410   53364 mustload.go:65] Loading cluster: multinode-552402
	I0625 16:25:22.005452   53364 notify.go:220] Checking for updates...
	I0625 16:25:22.005765   53364 config.go:182] Loaded profile config "multinode-552402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0625 16:25:22.005779   53364 status.go:255] checking status of multinode-552402 ...
	I0625 16:25:22.006128   53364 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:25:22.006188   53364 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:25:22.026527   53364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39131
	I0625 16:25:22.026900   53364 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:25:22.027394   53364 main.go:141] libmachine: Using API Version  1
	I0625 16:25:22.027424   53364 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:25:22.027751   53364 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:25:22.027917   53364 main.go:141] libmachine: (multinode-552402) Calling .GetState
	I0625 16:25:22.029370   53364 status.go:330] multinode-552402 host status = "Running" (err=<nil>)
	I0625 16:25:22.029387   53364 host.go:66] Checking if "multinode-552402" exists ...
	I0625 16:25:22.029693   53364 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:25:22.029761   53364 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:25:22.044895   53364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37959
	I0625 16:25:22.045277   53364 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:25:22.045673   53364 main.go:141] libmachine: Using API Version  1
	I0625 16:25:22.045704   53364 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:25:22.046046   53364 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:25:22.046240   53364 main.go:141] libmachine: (multinode-552402) Calling .GetIP
	I0625 16:25:22.049185   53364 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:25:22.049618   53364 main.go:141] libmachine: (multinode-552402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8e:1c", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:23:01 +0000 UTC Type:0 Mac:52:54:00:5d:8e:1c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-552402 Clientid:01:52:54:00:5d:8e:1c}
	I0625 16:25:22.049646   53364 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined IP address 192.168.39.231 and MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:25:22.049754   53364 host.go:66] Checking if "multinode-552402" exists ...
	I0625 16:25:22.050028   53364 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:25:22.050061   53364 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:25:22.065890   53364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40717
	I0625 16:25:22.066243   53364 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:25:22.066683   53364 main.go:141] libmachine: Using API Version  1
	I0625 16:25:22.066708   53364 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:25:22.067058   53364 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:25:22.067264   53364 main.go:141] libmachine: (multinode-552402) Calling .DriverName
	I0625 16:25:22.067464   53364 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:25:22.067488   53364 main.go:141] libmachine: (multinode-552402) Calling .GetSSHHostname
	I0625 16:25:22.070114   53364 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:25:22.070573   53364 main.go:141] libmachine: (multinode-552402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8e:1c", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:23:01 +0000 UTC Type:0 Mac:52:54:00:5d:8e:1c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-552402 Clientid:01:52:54:00:5d:8e:1c}
	I0625 16:25:22.070593   53364 main.go:141] libmachine: (multinode-552402) DBG | domain multinode-552402 has defined IP address 192.168.39.231 and MAC address 52:54:00:5d:8e:1c in network mk-multinode-552402
	I0625 16:25:22.070739   53364 main.go:141] libmachine: (multinode-552402) Calling .GetSSHPort
	I0625 16:25:22.070894   53364 main.go:141] libmachine: (multinode-552402) Calling .GetSSHKeyPath
	I0625 16:25:22.071034   53364 main.go:141] libmachine: (multinode-552402) Calling .GetSSHUsername
	I0625 16:25:22.071174   53364 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/multinode-552402/id_rsa Username:docker}
	I0625 16:25:22.155012   53364 ssh_runner.go:195] Run: systemctl --version
	I0625 16:25:22.161361   53364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:25:22.175657   53364 kubeconfig.go:125] found "multinode-552402" server: "https://192.168.39.231:8443"
	I0625 16:25:22.175682   53364 api_server.go:166] Checking apiserver status ...
	I0625 16:25:22.175710   53364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0625 16:25:22.188790   53364 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1116/cgroup
	W0625 16:25:22.197887   53364 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1116/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0625 16:25:22.197928   53364 ssh_runner.go:195] Run: ls
	I0625 16:25:22.202126   53364 api_server.go:253] Checking apiserver healthz at https://192.168.39.231:8443/healthz ...
	I0625 16:25:22.206932   53364 api_server.go:279] https://192.168.39.231:8443/healthz returned 200:
	ok
	I0625 16:25:22.206948   53364 status.go:422] multinode-552402 apiserver status = Running (err=<nil>)
	I0625 16:25:22.206957   53364 status.go:257] multinode-552402 status: &{Name:multinode-552402 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0625 16:25:22.206972   53364 status.go:255] checking status of multinode-552402-m02 ...
	I0625 16:25:22.207262   53364 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:25:22.207298   53364 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:25:22.222347   53364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37447
	I0625 16:25:22.222830   53364 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:25:22.223366   53364 main.go:141] libmachine: Using API Version  1
	I0625 16:25:22.223394   53364 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:25:22.223732   53364 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:25:22.223917   53364 main.go:141] libmachine: (multinode-552402-m02) Calling .GetState
	I0625 16:25:22.225314   53364 status.go:330] multinode-552402-m02 host status = "Running" (err=<nil>)
	I0625 16:25:22.225333   53364 host.go:66] Checking if "multinode-552402-m02" exists ...
	I0625 16:25:22.225596   53364 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:25:22.225625   53364 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:25:22.239972   53364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45403
	I0625 16:25:22.240344   53364 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:25:22.240706   53364 main.go:141] libmachine: Using API Version  1
	I0625 16:25:22.240725   53364 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:25:22.241049   53364 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:25:22.241200   53364 main.go:141] libmachine: (multinode-552402-m02) Calling .GetIP
	I0625 16:25:22.243819   53364 main.go:141] libmachine: (multinode-552402-m02) DBG | domain multinode-552402-m02 has defined MAC address 52:54:00:be:33:9f in network mk-multinode-552402
	I0625 16:25:22.244180   53364 main.go:141] libmachine: (multinode-552402-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:33:9f", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:24:03 +0000 UTC Type:0 Mac:52:54:00:be:33:9f Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:multinode-552402-m02 Clientid:01:52:54:00:be:33:9f}
	I0625 16:25:22.244213   53364 main.go:141] libmachine: (multinode-552402-m02) DBG | domain multinode-552402-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:be:33:9f in network mk-multinode-552402
	I0625 16:25:22.244335   53364 host.go:66] Checking if "multinode-552402-m02" exists ...
	I0625 16:25:22.244622   53364 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:25:22.244668   53364 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:25:22.258840   53364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37311
	I0625 16:25:22.259244   53364 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:25:22.259659   53364 main.go:141] libmachine: Using API Version  1
	I0625 16:25:22.259677   53364 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:25:22.259946   53364 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:25:22.260111   53364 main.go:141] libmachine: (multinode-552402-m02) Calling .DriverName
	I0625 16:25:22.260254   53364 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0625 16:25:22.260269   53364 main.go:141] libmachine: (multinode-552402-m02) Calling .GetSSHHostname
	I0625 16:25:22.262546   53364 main.go:141] libmachine: (multinode-552402-m02) DBG | domain multinode-552402-m02 has defined MAC address 52:54:00:be:33:9f in network mk-multinode-552402
	I0625 16:25:22.262883   53364 main.go:141] libmachine: (multinode-552402-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:33:9f", ip: ""} in network mk-multinode-552402: {Iface:virbr1 ExpiryTime:2024-06-25 17:24:03 +0000 UTC Type:0 Mac:52:54:00:be:33:9f Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:multinode-552402-m02 Clientid:01:52:54:00:be:33:9f}
	I0625 16:25:22.262911   53364 main.go:141] libmachine: (multinode-552402-m02) DBG | domain multinode-552402-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:be:33:9f in network mk-multinode-552402
	I0625 16:25:22.263049   53364 main.go:141] libmachine: (multinode-552402-m02) Calling .GetSSHPort
	I0625 16:25:22.263207   53364 main.go:141] libmachine: (multinode-552402-m02) Calling .GetSSHKeyPath
	I0625 16:25:22.263366   53364 main.go:141] libmachine: (multinode-552402-m02) Calling .GetSSHUsername
	I0625 16:25:22.263475   53364 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19128-13846/.minikube/machines/multinode-552402-m02/id_rsa Username:docker}
	I0625 16:25:22.346709   53364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0625 16:25:22.361696   53364 status.go:257] multinode-552402-m02 status: &{Name:multinode-552402-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0625 16:25:22.361733   53364 status.go:255] checking status of multinode-552402-m03 ...
	I0625 16:25:22.362058   53364 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0625 16:25:22.362099   53364 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0625 16:25:22.376861   53364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39149
	I0625 16:25:22.377292   53364 main.go:141] libmachine: () Calling .GetVersion
	I0625 16:25:22.377750   53364 main.go:141] libmachine: Using API Version  1
	I0625 16:25:22.377773   53364 main.go:141] libmachine: () Calling .SetConfigRaw
	I0625 16:25:22.378083   53364 main.go:141] libmachine: () Calling .GetMachineName
	I0625 16:25:22.378238   53364 main.go:141] libmachine: (multinode-552402-m03) Calling .GetState
	I0625 16:25:22.379873   53364 status.go:330] multinode-552402-m03 host status = "Stopped" (err=<nil>)
	I0625 16:25:22.379886   53364 status.go:343] host is not running, skipping remaining checks
	I0625 16:25:22.379892   53364 status.go:257] multinode-552402-m03 status: &{Name:multinode-552402-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.33s)
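
The stderr above shows the per-node status struct minikube builds (Name, Host, Kubelet, APIServer, Kubeconfig, Worker, ...), and the status command exits 7 once any node is stopped. A sketch that reads "minikube status --output json" for the same profile and tolerates that non-zero exit; the struct mirrors the fields logged above (Go's case-insensitive JSON matching covers key casing), and the array-vs-object handling is an assumption, not minikube's documented contract:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Mirrors the fields visible in the status dump above; not minikube's own type.
type nodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// Exit code 7 just means some node is stopped, so keep the captured stdout
	// even when the command reports an *exec.ExitError.
	out, err := exec.Command("minikube", "-p", "multinode-552402", "status", "--output", "json").Output()
	if err != nil {
		if _, ok := err.(*exec.ExitError); !ok {
			log.Fatal(err)
		}
	}
	var nodes []nodeStatus
	if len(out) > 0 && out[0] == '[' { // multinode profiles: assumed to print a JSON array
		if err := json.Unmarshal(out, &nodes); err != nil {
			log.Fatal(err)
		}
	} else { // single-node profiles: one object
		var n nodeStatus
		if err := json.Unmarshal(out, &n); err != nil {
			log.Fatal(err)
		}
		nodes = append(nodes, n)
	}
	for _, n := range nodes {
		fmt.Printf("%s: host=%s kubelet=%s apiserver=%s worker=%t\n", n.Name, n.Host, n.Kubelet, n.APIServer, n.Worker)
	}
}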

TestMultiNode/serial/StartAfterStop (29.09s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-552402 node start m03 -v=7 --alsologtostderr: (28.501989766s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.09s)

TestMultiNode/serial/DeleteNode (2.29s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-552402 node delete m03: (1.786187585s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.29s)

TestMultiNode/serial/RestartMultiNode (186.8s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-552402 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0625 16:34:29.129859   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-552402 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m6.276559861s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-552402 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (186.80s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (45.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-552402
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-552402-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-552402-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (59.334173ms)

                                                
                                                
-- stdout --
	* [multinode-552402-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19128-13846/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19128-13846/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-552402-m02' is duplicated with machine name 'multinode-552402-m02' in profile 'multinode-552402'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-552402-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-552402-m03 --driver=kvm2  --container-runtime=crio: (44.338180764s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-552402
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-552402: exit status 80 (206.49881ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-552402 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-552402-m03 already exists in multinode-552402-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-552402-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.58s)

                                                
                                    
TestScheduledStopUnix (110.45s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-030462 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-030462 --memory=2048 --driver=kvm2  --container-runtime=crio: (38.970139041s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-030462 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-030462 -n scheduled-stop-030462
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-030462 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-030462 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-030462 -n scheduled-stop-030462
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-030462
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-030462 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0625 16:44:29.127933   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-030462
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-030462: exit status 7 (60.912478ms)

                                                
                                                
-- stdout --
	scheduled-stop-030462
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-030462 -n scheduled-stop-030462
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-030462 -n scheduled-stop-030462: exit status 7 (63.647193ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-030462" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-030462
--- PASS: TestScheduledStopUnix (110.45s)

                                                
                                    
TestRunningBinaryUpgrade (253.82s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1146972063 start -p running-upgrade-455917 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1146972063 start -p running-upgrade-455917 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m8.457406298s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-455917 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-455917 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (2m1.510180772s)
helpers_test.go:175: Cleaning up "running-upgrade-455917" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-455917
E0625 16:49:12.177517   21239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19128-13846/.minikube/profiles/functional-951282/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-455917: (1.223986961s)
--- PASS: TestRunningBinaryUpgrade (253.82s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-387003 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-387003 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (72.522065ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-387003] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19128-13846/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19128-13846/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (98.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-387003 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-387003 --driver=kvm2  --container-runtime=crio: (1m37.770461499s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-387003 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (98.01s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (32.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-387003 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-387003 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.344856378s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-387003 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-387003 status -o json: exit status 2 (231.9868ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-387003","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-387003
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-387003: (1.512747145s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (32.09s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.63s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.63s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (127.48s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2962494157 start -p stopped-upgrade-035129 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2962494157 start -p stopped-upgrade-035129 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m7.813313494s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2962494157 -p stopped-upgrade-035129 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2962494157 -p stopped-upgrade-035129 stop: (12.152388558s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-035129 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-035129 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (47.513755488s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (127.48s)

                                                
                                    
TestNoKubernetes/serial/Start (51.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-387003 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-387003 --no-kubernetes --driver=kvm2  --container-runtime=crio: (51.112307468s)
--- PASS: TestNoKubernetes/serial/Start (51.11s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-387003 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-387003 "sudo systemctl is-active --quiet service kubelet": exit status 1 (192.237805ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (10.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (9.755082529s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (10.50s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.56s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-387003
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-387003: (1.561490536s)
--- PASS: TestNoKubernetes/serial/Stop (1.56s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (21.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-387003 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-387003 --driver=kvm2  --container-runtime=crio: (21.823443118s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (21.82s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-387003 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-387003 "sudo systemctl is-active --quiet service kubelet": exit status 1 (191.757279ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
TestPause/serial/Start (111.73s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-756277 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-756277 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m51.726256259s)
--- PASS: TestPause/serial/Start (111.73s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-035129
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

                                                
                                    

Test skip (32/207)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    